Phoneme-based word recognition with resilient backpropagation modeling (Pengenalan kata berbasiskan fonem dengan pemodelan resilient backpropagation)

View/Open
Full Text (974.3Kb)
Abstract (276.9Kb)
Chapter I (280.8Kb)
Chapter II (559.3Kb)
Chapter III (382.7Kb)
Chapter IV (403.7Kb)
Chapter V (281.2Kb)
Cover (281.9Kb)
Bibliography (274.5Kb)
Appendices (406.1Kb)
      Date
      2011
      Author
      Prameswari
      Buono, Agus
      Abstract
The aim of this research is to evaluate the performance of a neural network as a model for word recognition. The research uses Resilient Backpropagation for modeling and Mel-Frequency Cepstral Coefficients (MFCC) for feature extraction. The voice data come from a single speaker. A total of 70 words are used: 50 dictionary words and 20 test words, each of the 20 repeated 10 times. The dictionary words are built from combinations of the 10 phonemes used in this research, namely 4 vowels and 6 consonants. Of the 10 repetitions of each of the 20 words, 7 are used as training data and 3 as test data. The output of the testing process is a word transcription; the conversion from transcription to word is done manually by 5 people. The research produces two models. The model with 100 hidden neurons achieves an average accuracy of 75% on the test data and 61% on the dictionary words. The best average accuracy, 93% on the test data and 62% on the dictionary words, is obtained with the model with 1000 hidden neurons. However, overfitting occurs in this second model, so it produces good output only for data it has been trained on.
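The core of the method is the Rprop (resilient backpropagation) update rule, which adapts a separate step size for each weight from the sign of successive gradients rather than their magnitude. The sketch below shows that rule in isolation; the hyperparameters follow the common defaults from the Rprop literature, since the abstract does not state the thesis's exact settings, and the toy objective is purely illustrative.

```python
import numpy as np

# Common Rprop defaults (assumed; not taken from the thesis).
ETA_PLUS, ETA_MINUS = 1.2, 0.5
STEP_MIN, STEP_MAX = 1e-6, 50.0

def rprop_update(w, grad, prev_grad, step):
    """One Rprop step: adapt per-weight step sizes from gradient signs."""
    sign_change = grad * prev_grad
    # Same sign as last step: grow the step size (up to STEP_MAX).
    step = np.where(sign_change > 0, np.minimum(step * ETA_PLUS, STEP_MAX), step)
    # Sign flipped: we overshot, so shrink the step (down to STEP_MIN)
    # and zero the stored gradient so no adaptation happens next step.
    step = np.where(sign_change < 0, np.maximum(step * ETA_MINUS, STEP_MIN), step)
    grad = np.where(sign_change < 0, 0.0, grad)
    # Move each weight by its own step size, in the downhill direction.
    w = w - np.sign(grad) * step
    return w, grad, step

# Toy usage: minimise f(w) = w^2, whose gradient is 2w.
w = np.array([3.0])
prev_grad = np.zeros_like(w)
step = np.full_like(w, 0.1)
for _ in range(50):
    grad = 2 * w
    w, prev_grad, step = rprop_update(w, grad, prev_grad, step)
```

Because only the gradient sign is used, Rprop is insensitive to the scale of the error surface, which is part of why it trains multilayer perceptrons quickly without a hand-tuned global learning rate.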
      URI
      http://repository.ipb.ac.id/handle/123456789/47409
      Collections
      • UT - Computer Science [2482]

      Copyright © 2020 Library of IPB University
      All rights reserved
      Contact Us | Send Feedback
      Indonesia DSpace Group 
      IPB University Scientific Repository
      UIN Syarif Hidayatullah Institutional Repository
      Universitas Jember Digital Repository