Show simple item record

dc.contributor.advisor: Cerón Correa, Alexander
dc.contributor.author: Gómez Alvarado, Diego Felipe
dc.date.accessioned: 2021-08-12T16:54:05Z
dc.date.available: 2021-08-12T16:54:05Z
dc.date.issued: 2021-02-25
dc.identifier.uri: http://hdl.handle.net/10654/38471
dc.description.abstract: This document presents the rationale, development and results obtained during the training and evaluation of different computational models for object recognition, all built on convolutional neural networks as their fundamental pillar. The main objectives of the work were the collection of the training set, implementation, and performance testing. An evaluation was carried out for ten architectures and/or methods for object recognition, six with the TensorFlow Object Detection API and four using the Darknet framework, in order to select the model with the best operating behavior given parameters concerning precision, speed and resource demand. The images for the dataset were collected at the facilities of the Universidad Militar Nueva Granada by capturing videos and photographs, which were manually labeled and later used in the training process for each of the ten models/methods, under two different frameworks. The document is divided into six main chapters covering the introduction and nature of the project, state of the art, theoretical foundation, development, results and final considerations.
dc.format.mimetype: application/pdf
dc.language.iso: spa
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.title: Reconocimiento de objetos utilizando técnicas de aprendizaje profundo
dc.rights.accessrights: info:eu-repo/semantics/openAccess
dc.subject.lemb: NEURAL NETWORKS (COMPUTERS)
dc.subject.lemb: OBJECT-ORIENTED PROGRAMMING (COMPUTING)
dc.type.local: Thesis/Degree work - Monograph - Undergraduate
dc.description.abstractenglish: This document presents the theory, development and results obtained during the training and evaluation process of the different computational models for object recognition, which use convolutional neural networks as the main principle. The project had several main objectives, including the collection of the training set, implementation and performance tests. An evaluation process was carried out for ten architectures and/or methods for object recognition, six with the TensorFlow Object Detection API and four with the Darknet framework, in order to select the best model in terms of operating process, given certain parameters concerning precision, speed and resource demand. The collection of the images for the training dataset took place in the facilities of the Nueva Granada Military University, through the taking of videos and photographs, which were manually labeled and later used in the training process for each of the ten models/methods under two different working environments. The document is divided into six main chapters concerning the introduction and nature of the project, state of the art, theoretical foundation, development, results and final considerations.
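The evaluation process described in the abstract rests on matching predicted bounding boxes against manually labeled ground truth by intersection-over-union (IoU), then scoring precision and recall. The following is a minimal illustrative sketch of that matching step, not code from the thesis; the corner-coordinate box format `(x1, y1, x2, y2)`, the greedy matching strategy, and the 0.5 IoU threshold are assumptions (0.5 being the Pascal VOC convention cited in the references below).

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def precision_recall(preds, gts, iou_thr=0.5):
    """Greedily match each prediction to the best still-unmatched
    ground-truth box; IoU >= iou_thr counts as a true positive."""
    matched, tp = set(), 0
    for p in preds:
        best_iou, best_i = 0.0, None
        for i, g in enumerate(gts):
            if i in matched:
                continue
            o = iou(p, g)
            if o > best_iou:
                best_iou, best_i = o, i
        if best_i is not None and best_iou >= iou_thr:
            matched.add(best_i)
            tp += 1
    fp = len(preds) - tp  # detections with no ground-truth match
    fn = len(gts) - tp    # labeled objects that were missed
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(gts) if gts else 0.0
    return precision, recall
```

Metrics such as Pascal VOC mAP extend this idea by sorting detections by confidence score and averaging precision over recall levels, per class.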
dc.title.translated: Object recognition using deep learning techniques
dc.subject.keywords: convolutional neural networks
dc.subject.keywords: deep learning
dc.subject.keywords: layers
dc.subject.keywords: object recognition
dc.subject.keywords: training
dc.publisher.program: Ingeniería Multimedia
dc.creator.degreename: Ingeniero Multimedia
dc.description.degreelevel: Undergraduate
dc.publisher.faculty: Facultad de Ingeniería
dc.type.driver: info:eu-repo/semantics/bachelorThesis
dc.rights.creativecommons: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.relation.references: A. Andreopoulos and J. K. Tsotsos, “50 Years of object recognition: Directions forward,” Computer Vision and Image Understanding, vol. 117, no. 8, pp. 827-891, 2013, issn: 1077-3142. doi: https://doi.org/10.1016/j.cviu.2013.04.005. available: http://www.sciencedirect.com/science/article/pii/S107731421300091X
dc.relation.references: Y. Dingyi, W. Haiyan and Y. Kaiming, “State-of-the-art and trends of autonomous driving technology,” in 2018 IEEE International Symposium on Innovation and Entrepreneurship (TEMS-ISIE), Mar. 2018, pp. 1-8. doi: https://doi.org/10.1109/TEMS-ISIE.2018.8478449
dc.relation.references: M. Bansal, M. Kumar and M. Kumar, “2D Object Recognition Techniques: State-of-the-Art Work,” Archives of Computational Methods in Engineering, Feb. 2020, issn: 1886-1784. doi: 10.1007/s11831-020-09409-1. available: https://doi-org.ezproxy.umng.edu.co/10.1007/s11831-020-09409-1
dc.relation.references: S. Manzoor, S. Joo and T. Kuc, “Comparison of Object Recognition Approaches using Traditional Machine Vision and Modern Deep Learning Techniques for Mobile Robot,” in 2019 19th International Conference on Control, Automation and Systems (ICCAS), Oct. 2019, pp. 1316-1321. doi: 10.23919/ICCAS47443.2019.8971680
dc.relation.references: P. Viola and M. Jones, “Rapid object detection using a boosted cascade of simple features,” in Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001, vol. 1, Dec. 2001, pp. I-I. doi: 10.1109/CVPR.2001.990517
dc.relation.references: J. Wang, Y. Ma, L. Zhang, R. X. Gao and D. Wu, “Deep learning for smart manufacturing: Methods and applications,” Journal of Manufacturing Systems, vol. 48, pp. 144-156, 2018, Special Issue on Smart Manufacturing, issn: 0278-6125. doi: https://doi.org/10.1016/j.jmsy.2018.01.003. available: http://www.sciencedirect.com/science/article/pii/S0278612518300037
dc.relation.references: M. Gheisari, G. Wang and M. Z. A. Bhuiyan, “A Survey on Deep Learning in Big Data,” in 2017 IEEE International Conference on Computational Science and Engineering (CSE) and IEEE International Conference on Embedded and Ubiquitous Computing (EUC), vol. 2, Jul. 2017, pp. 173-180. doi: 10.1109/CSE-EUC.2017.215
dc.relation.references: S. Dargan, M. Kumar, M. R. Ayyagari and G. Kumar, “A Survey of Deep Learning and Its Applications: A New Paradigm to Machine Learning,” Archives of Computational Methods in Engineering, vol. 27, no. 4, pp. 1071-1092, Sep. 2020, issn: 1886-1784. doi: 10.1007/s11831-019-09344-w. available: https://doi-org.ezproxy.umng.edu.co/10.1007/s11831-019-09344-w
dc.relation.references: P. Bezak, “Building recognition system based on deep learning,” in 2016 Third International Conference on Artificial Intelligence and Pattern Recognition (AIPR), Sep. 2016, pp. 1-5. doi: 10.1109/ICAIPR.2016.7585230
dc.relation.references: L. Hui-bin, W. Fei, C. Qiang and P. Yong, “Recognition of individual object in focus people group based on deep learning,” in 2016 International Conference on Audio, Language and Image Processing (ICALIP), Jul. 2016, pp. 615-619. doi: 10.1109/ICALIP.2016.7846607
dc.relation.references: Y. Sakai, T. Oda, M. Ikeda and L. Barolli, “A Vegetable Category Recognition System Using Deep Neural Network,” in 2016 10th International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS), Jul. 2016, pp. 189-192. doi: 10.1109/IMIS.2016.84
dc.relation.references: X. Ding, Y. Luo, Q. Yu, Q. Li, Y. Cheng, R. Munnoch, D. Xue and G. Cai, “Indoor object recognition using pre-trained convolutional neural network,” in 2017 23rd International Conference on Automation and Computing (ICAC), Sep. 2017, pp. 1-6. doi: 10.23919/IConAC.2017.8081986
dc.relation.references: B. Tian, L. Li, Y. Qu and L. Yan, “Video Object Detection for Tractability with Deep Learning Method,” in 2017 Fifth International Conference on Advanced Cloud and Big Data (CBD), Aug. 2017, pp. 397-401. doi: 10.1109/CBD.2017.75
dc.relation.references: S. Caraiman, A. Morar, M. Owczarek, A. Burlacu, D. Rzeszotarski, N. Botezatu, P. Herghelegiu, F. Moldoveanu, P. Strumillo and A. Moldoveanu, “Computer Vision for the Visually Impaired: the Sound of Vision System,” in 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), Oct. 2017, pp. 1480-1489. doi: 10.1109/ICCVW.2017.175
dc.relation.references: C. Li, Y. Zhang and Y. Qu, “Object detection based on deep learning of small samples,” in 2018 Tenth International Conference on Advanced Computational Intelligence (ICACI), Mar. 2018, pp. 449-454. doi: 10.1109/ICACI.2018.8377501
dc.relation.references: R. Jiménez Moreno, O. F. Avilés Sánchez and D. M. Ovalle Martínez, “Red neuronal convolucional para discriminar herramientas en robótica asistencial,” Visión electrónica, vol. 12, no. 2, pp. 8-8, 2018, issn: 1909-9746. available: https://dialnet.unirioja.es/descarga/articulo/6747028.pdf
dc.relation.references: S. U. Habiba, M. K. Islam and S. M. M. Ahsan, “Bangladeshi Plant Recognition using Deep Learning based Leaf classification,” in 2019 International Conference on Computer, Communication, Chemical, Materials and Electronic Engineering (IC4ME2), Jul. 2019, pp. 1-4. doi: 10.1109/IC4ME247184.2019.9036515
dc.relation.references: H. Ali, M. Khursheed, S. K. Fatima, S. M. Shuja and S. Noor, “Object Recognition for Dental Instruments Using SSD-MobileNet,” in 2019 International Conference on Information Science and Communication Technology (ICISCT), Mar. 2019, pp. 1-6. doi: 10.1109/CISCT.2019.8777441
dc.relation.references: R. Sarić, M. Ulbricht, M. Krstić, J. Kevrić and D. Jokić, “Recognition of Objects in the Urban Environment using R-CNN and YOLO Deep Learning Algorithms,” in 2020 9th Mediterranean Conference on Embedded Computing (MECO), Jun. 2020, pp. 1-4. doi: 10.1109/MECO49872.2020.9134080
dc.relation.references: J. Guerrero-Viu, C. Fernandez-Labrador, C. Demonceaux and J. J. Guerrero, “What’s in my Room - Object Recognition on Indoor Panoramic Images,” in 2020 IEEE International Conference on Robotics and Automation (ICRA), May 2020, pp. 567-573. doi: 10.1109/ICRA40945.2020.9197335
dc.relation.references: B. S. Alfonso, “Aplicación de aprendizaje profundo en la identificación de obstáculos en el trayecto de vehículos,” Undergraduate degree project, Universidad Militar Nueva Granada, Facultad de Ingeniería, Programa de pregrado - Ingeniería en Mecatrónica, 2018. available: http://hdl.handle.net/10654/17651
dc.relation.references: J. O. Pinzón, “Algoritmo de operación para robot asistencial autónomo enfocado a alimentación,” Master’s thesis, Universidad Militar Nueva Granada, Facultad de Ingeniería, Programa de maestría - Ingeniería en Mecatrónica, 2019. available: http://hdl.handle.net/10654/32762
dc.relation.references: J. J. Vogulys, “Desarrollo de un asistente de conducción longitudinal mediante un Algoritmo de Aprendizaje Profundo,” Master’s thesis, Universidad Militar Nueva Granada, Facultad de Ingeniería, Programa de maestría - Ingeniería en Mecatrónica, 2020. available: http://hdl.handle.net/10654/35691
dc.relation.references: D. G. Fisher, Robotics: New Research, ser. Robotics Research and Technology. Hauppauge, New York, USA: Nova Science Publishers, Inc, 2016, isbn: 9781634859677
dc.relation.references: U. Shimon, High-Level Vision: Object Recognition and Visual Cognition. Cambridge, Massachusetts, USA and London, England, UK: A Bradford Book, The MIT Press, 1996, isbn: 9780262210133
dc.relation.references: G. G. Calvo, P. Joshi and N. Yellavula, OpenCV 3x with Python by Example, 2nd ed. Packt Publishing, Limited, 2018, electronic version on ProQuest Ebook Central, isbn: 9781788396769
dc.relation.references: A. Yali, 2D Object Detection and Recognition: Models, Algorithms, and Networks. The MIT Press, 2002, isbn: 9780262011945
dc.relation.references: B. Cyganek, Object Detection and Recognition in Digital Images: Theory and Practice. John Wiley & Sons, Incorporated, 2013, electronic version on ProQuest Ebook Central, isbn: 9781118618370
dc.relation.references: D. Graupe, Principles of Artificial Neural Networks, 3rd ed. Singapore: World Scientific Publishing Co Pte Ltd, 2013, electronic version on ProQuest Ebook Central, isbn: 9789814522748
dc.relation.references: J. A. Flores, Focus on Artificial Neural Networks. Nova Science Publishers, Incorporated, 2011, electronic version on ProQuest Ebook Central, isbn: 9781619421004
dc.relation.references: I. N. da Silva, D. H. Spatti, R. A. Flauzino, L. H. B. Liboni and S. F. dos Reis Alves, Artificial Neural Networks: A Practical Course. Cham, Switzerland: Springer International Publishing, 2017, isbn: 978-3-319-43162-8. doi: 10.1007/978-3-319-43162-8
dc.relation.references: C. C. Aggarwal, Neural Networks and Deep Learning. Cham, Switzerland: Springer International Publishing, 2018, isbn: 978-3-319-94463-0. available: https://doi-org.ezproxy.umng.edu.co/10.1007/978-3-319-94463-0
dc.relation.references: A. Voulodimos, N. Doulamis, A. Doulamis and E. Protopapadakis, “Deep Learning for Computer Vision: A Brief Review,” Computational Intelligence & Neuroscience, pp. 1-13, 2018, issn: 16875265
dc.relation.references: Y. LeCun, Y. Bengio and G. Hinton, “Deep learning,” Nature, vol. 521, pp. 436-444, May 2015, issn: 1476-4687. doi: 10.1038/nature14539. available: https://doi-org.ezproxy.umng.edu.co/10.1038/nature14539
dc.relation.references: X. Du, Y. Cai, S. Wang and L. Zhang, “Overview of deep learning,” in 2016 31st Youth Academic Annual Conference of Chinese Association of Automation (YAC), Nov. 2016, pp. 159-164. doi: 10.1109/YAC.2016.7804882
dc.relation.references: W. A. Adkins and M. G. Davidson, Ordinary Differential Equations. New York, NY, USA: Springer Science+Business Media, 2012, isbn: 978-1-4614-3618-8. doi: 10.1007/978-1-4614-3618-8. available: https://doi-org.ezproxy.umng.edu.co/10.1007/978-1-4614-3618-8
dc.relation.references: I. Goodfellow, Y. Bengio and A. Courville, Deep Learning. MIT Press, 2016, http://www.deeplearningbook.org, last visited: 2020-11-30
dc.relation.references: R. Yamashita, M. Nishio, R. K. G. Do and K. Togashi, “Convolutional neural networks: an overview and application in radiology,” Insights into Imaging, vol. 9, pp. 611-629, Aug. 2018, issn: 1869-4101. doi: 10.1007/s13244-018-0639-9. available: https://doi-org.ezproxy.umng.edu.co/10.1007/s13244-018-0639-9
dc.relation.references: U. Michelucci, Applied Deep Learning. Berkeley, California, USA: Apress, 2018, isbn: 978-1-4842-3790-8. available: https://doi-org.ezproxy.umng.edu.co/10.1007/978-1-4842-3790-8
dc.relation.references: Y. Lecun, L. Bottou, Y. Bengio and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, Nov. 1998, issn: 1558-2256. doi: 10.1109/5.726791
dc.relation.references: A. Krizhevsky, I. Sutskever and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” Communications of the ACM, vol. 60, no. 6, pp. 84-90, 2017, issn: 00010782. available: https://ezproxy.umng.edu.co/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=bsu&AN=123446102&lang=es&site=eds-live
dc.relation.references: M. D. Zeiler and R. Fergus, “Visualizing and Understanding Convolutional Networks,” in Computer Vision – ECCV 2014, D. Fleet, T. Pajdla, B. Schiele and T. Tuytelaars, eds., Cham, Switzerland: Springer International Publishing, 2014, pp. 818-833, isbn: 978-3-319-10590-1
dc.relation.references: K. Simonyan and A. Zisserman, Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014. arXiv: 1409.1556 [cs.CV]
dc.relation.references: C. Szegedy, Wei Liu, Yangqing Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke and A. Rabinovich, “Going deeper with convolutions,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2015, pp. 1-9. doi: 10.1109/CVPR.2015.7298594
dc.relation.references: K. He, X. Zhang, S. Ren and J. Sun, “Deep Residual Learning for Image Recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2016, pp. 770-778. doi: 10.1109/CVPR.2016.90
dc.relation.references: A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto and H. Adam, “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications,” CoRR, vol. abs/1704.04861, 2017. arXiv: 1704.04861. available: http://arxiv.org/abs/1704.04861
dc.relation.references: R. Girshick, J. Donahue, T. Darrell and J. Malik, “Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation,” in 2014 IEEE Conference on Computer Vision and Pattern Recognition, Jun. 2014, pp. 580-587. doi: 10.1109/CVPR.2014.81
dc.relation.references: J. Uijlings, K. Sande, T. Gevers and A. Smeulders, “Selective Search for Object Recognition,” International Journal of Computer Vision, vol. 104, no. 2, pp. 154-171, 2013, issn: 09205691. available: https://ezproxy.umng.edu.co/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=asn&AN=89397633&lang=es&site=eds-live
dc.relation.references: R. Girshick, “Fast R-CNN,” in 2015 IEEE International Conference on Computer Vision (ICCV), Dec. 2015, pp. 1440-1448. doi: 10.1109/ICCV.2015.169
dc.relation.references: S. Ren, K. He, R. Girshick and J. Sun, “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” no. 6, vol. 39, Jun. 2017, pp. 1137-1149. doi: 10.1109/TPAMI.2016.2577031
dc.relation.references: W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu and A. C. Berg, “SSD: Single Shot MultiBox Detector,” in Computer Vision – ECCV 2016, B. Leibe, J. Matas, N. Sebe and M. Welling, eds., Cham, Switzerland: Springer International Publishing, 2016, pp. 21-37, isbn: 978-3-319-46448-0
dc.relation.references: J. Redmon, S. Divvala, R. Girshick and A. Farhadi, “You Only Look Once: Unified, Real-Time Object Detection,” pp. 779-788, Jun. 2016, issn: 1063-6919. doi: 10.1109/CVPR.2016.91
dc.relation.references: J. Redmon and A. Farhadi, “YOLO9000: Better, Faster, Stronger,” arXiv e-prints, arXiv:1612.08242, Dec. 2016. arXiv: 1612.08242 [cs.CV]
dc.relation.references: J. Redmon and A. Farhadi, “YOLOv3: An Incremental Improvement,” arXiv e-prints, arXiv:1804.02767, Apr. 2018. arXiv: 1804.02767
dc.relation.references: P. Adarsh, P. Rathi and M. Kumar, “YOLO v3-Tiny: Object Detection and Recognition using one stage improved model,” in 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS), Mar. 2020, pp. 687-694. doi: 10.1109/ICACCS48705.2020.9074315
dc.relation.references: A. Bochkovskiy, C.-Y. Wang and H.-Y. M. Liao, “YOLOv4: Optimal Speed and Accuracy of Object Detection,” arXiv e-prints, arXiv:2004.10934, Apr. 2020. arXiv: 2004.10934 [cs.CV]
dc.relation.references: Z. Jiang, L. Zhao, S. Li and Y. Jia, “Real-time object detection method based on improved YOLOv4-tiny,” arXiv e-prints, arXiv:2011.04244, Nov. 2020. arXiv: 2011.04244 [cs.CV]
dc.relation.references: K. He, G. Gkioxari, P. Dollár and R. Girshick, “Mask R-CNN,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 2, pp. 386-397, Feb. 2020, issn: 1939-3539. doi: 10.1109/TPAMI.2018.2844175
dc.relation.references: M. J. Shafiee, B. Chywl, F. Li and A. Wong, “Fast YOLO: A Fast You Only Look Once System for Real-time Embedded Object Detection in Video,” CoRR, vol. abs/1709.05943, 2017. arXiv: 1709.05943. available: http://arxiv.org/abs/1709.05943
dc.relation.references: C. Shorten and T. M. Khoshgoftaar, “A survey on Image Data Augmentation for Deep Learning,” Journal of Big Data, vol. 6, p. 60, Jul. 2019, issn: 2196-1115. doi: 10.1186/s40537-019-0197-0. available: https://doi-org.ezproxy.umng.edu.co/10.1186/s40537-019-0197-0
dc.relation.references: “Batch Learning,” in Encyclopedia of Machine Learning, C. Sammut and G. I. Webb, eds. Boston, MA: Springer US, 2010, pp. 74-74, isbn: 978-0-387-30164-8. doi: 10.1007/978-0-387-30164-8_58. available: https://doi.org/10.1007/978-0-387-30164-8_58
dc.relation.references: J. Huang, V. Rathod, C. Sun, M. Zhu, A. Korattikara, A. Fathi, I. Fischer, Z. Wojna, Y. Song, S. Guadarrama and K. Murphy, “Speed/Accuracy Trade-Offs for Modern Convolutional Object Detectors,” pp. 3296-3297, Jul. 2017, issn: 1063-6919. doi: 10.1109/CVPR.2017.351
dc.relation.references: J. Redmon, Darknet: Open Source Neural Networks in C, http://pjreddie.com/darknet/, 2013-2016
dc.relation.references: M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn and A. Zisserman, “The Pascal Visual Object Classes (VOC) Challenge,” International Journal of Computer Vision, vol. 88, pp. 303-338, Jun. 2010, issn: 1573-1405. doi: 10.1007/s11263-009-0275-4. available: https://doi-org.ezproxy.umng.edu.co/10.1007/s11263-009-0275-4
dc.relation.references: D. M. W. Powers, “Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation,” arXiv e-prints, arXiv:2010.16061, Oct. 2020. arXiv: 2010.16061 [cs.LG]
dc.relation.references: C. Goutte and E. Gaussier, “A Probabilistic Interpretation of Precision, Recall and F-Score, with Implication for Evaluation,” in Advances in Information Retrieval, D. E. Losada and J. M. Fernández-Luna, eds., Berlin, Heidelberg, Germany: Springer Berlin Heidelberg, 2005, pp. 345-359, isbn: 978-3-540-31865-1. available: https://doi-org.ezproxy.umng.edu.co/10.1007/978-3-540-31865-1_25
dc.relation.references: T. Skopal and P. Moravec, “Modified LSI Model for Efficient Search by Metric Access Methods,” in Advances in Information Retrieval, D. E. Losada and J. M. Fernández-Luna, eds., Berlin, Heidelberg, Germany: Springer Berlin Heidelberg, 2005, pp. 245-259, isbn: 978-3-540-31865-1. available: https://doi-org.ezproxy.umng.edu.co/10.1007/978-3-540-31865-1_18
dc.relation.references: S. M. Beitzel, E. C. Jensen and O. Frieder, “MAP,” in Encyclopedia of Database Systems, L. LIU and M. T. ÖZSU, eds. Boston, MA: Springer US, 2009, pp. 1691-1692, isbn: 978-0-387-39940-9. doi: 10.1007/978-0-387-39940-9_492. available: https://doi.org/10.1007/978-0-387-39940-9_492
dc.subject.proposal: deep learning
dc.subject.proposal: layers
dc.subject.proposal: training
dc.subject.proposal: object recognition
dc.subject.proposal: convolutional neural networks
dc.publisher.grantor: Universidad Militar Nueva Granada
dc.type.coar: http://purl.org/coar/resource_type/c_7a1f
dc.type.hasversion: info:eu-repo/semantics/acceptedVersion
dc.identifier.instname: instname:Universidad Militar Nueva Granada
dc.identifier.reponame: reponame:Repositorio Institucional Universidad Militar Nueva Granada
dc.identifier.repourl: repourl:https://repository.unimilitar.edu.co
dc.rights.local: Open access
dc.coverage.sede: Calle 100
dc.rights.coar: http://purl.org/coar/access_right/c_abf2

