References

[1] L. Velho, A. C. Frery, and J. Gomes, Image processing for computer graphics and vision, 2nd ed. London: Springer, 2009.

[2] R. C. Gonzalez and R. E. Woods, Processamento digital de imagens, 3rd ed. São Paulo: Pearson Prentice Hall, 2010.

[3] H. Pedrini and W. R. Schwartz, Análise de imagens digitais: princípios, algoritmos e aplicações, 3rd ed. São Paulo: Thomson Learning Edições Ltda, 2007.

[4] T. B. Moeslund, Introduction to video and image processing: Building real systems and applications. Springer Science & Business Media, 2012.

[5] B. H. A. I. Magazine, “Esquema de uma rede multicamada de perceptrons”. 2019. [Online]. Available: https://miro.medium.com/max/533/0*vl2T7-m8O3-_2hhm.jpg.

[6] B. E. da cor, “Imagem de um espectro eletromagnético”. 2014. [Online]. Available: https://estudodacor.files.wordpress.com/2014/08/espectro-eletromagnetico1.jpg.

[7] W. Commons, “Crab nebula in multiple wavelengths”. [Online]. Available: https://commons.wikimedia.org/wiki/File:Crab_Nebula_in_Multiple_Wavelengths.png.

[8] I. C. Cuthill et al., “The biology of color”, Science, vol. 357, no. 6350, 2017.

[9] W. Burger and M. J. Burge, Principles of digital image processing, vol. 111. Springer, 2009.

[10] U. Teubner and H. J. Bruckner, Optical Imaging and Photography: Introduction to Science and Technology of Optics, Sensors and Systems. Walter de Gruyter GmbH & Co KG, 2019.

[11] W. Commons, “Bayer pattern on sensor profile”. 2006. [Online]. Available: https://commons.wikimedia.org/wiki/File:Bayer_pattern_on_sensor_profile.svg.

[12] A. C. Bovik, The essential guide to image processing. Academic Press, 2009.

[13] E. Azevedo, A. Conci, and C. Vasconcelos, Computação gráfica: Teoria e prática: geração de imagens, vol. 1. Elsevier Brasil, 2018.

[14] W. Commons, “Kernel (image processing)”. 2011. [Online]. Available: https://en.wikipedia.org/wiki/Kernel_(image_processing).

[15] T. e2eML Course Catalog, “Convolution in one dimension for neural networks”. 2020. [Online]. Available: https://e2eml.school/convolution_one_d.html.

[16] R. Chityala and S. Pudipeddi, Image processing and acquisition using Python. CRC Press, 2020.

[17] J. Canny, “A computational approach to edge detection”, IEEE Transactions on Pattern Analysis and Machine Intelligence, no. 6, pp. 679–698, 1986.

[18] M. Sonka, V. Hlavac, and R. Boyle, Image processing, analysis, and machine vision. Cengage Learning, 2014.

[19] M. Nixon and A. Aguado, Feature extraction and image processing for computer vision. Academic Press, 2019.

[20] NASA, “Hubble Ultra-Deep Field”. 2004. [Online]. Available: https://imgsrc.hubblesite.org/hu/db/images/hs-2014-27-a-full_jpg.jpg.

[21] H. Bay, T. Tuytelaars, and L. Van Gool, “SURF: Speeded up robust features”, in European Conference on Computer Vision (ECCV), 2006, pp. 404–417.

[22] K. Cen, “Study of Viola-Jones Real Time Face Detector”, 2016.

[23] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016.

[24] A. Chiang, “IBM Deep Blue at Computer History Museum”. 2020. [Online]. Available: https://commons.wikimedia.org/wiki/File:IBM_Deep_Blue_at_Computer_History_Museum_(9361685537).jpg.

[25] S. Ramón y Cajal, Comparative study of the sensory areas of the human cortex. Clark University, 1899.

[26] W. S. McCulloch and W. Pitts, “A logical calculus of the ideas immanent in nervous activity”, The Bulletin of Mathematical Biophysics, vol. 5, no. 4, pp. 115–133, 1943.

[27] Glosser.ca, “Artificial neural network with layer coloring”. 2013. [Online]. Available: https://commons.wikimedia.org/wiki/File:Colored_neural_network.svg.

[28] S. Russell and P. Norvig, “Artificial intelligence: a modern approach”, 2016.

[29] S. Haykin, Redes Neurais: Princípios e prática, 2nd ed. São Paulo: Bookman, 1999.

[30] CS231n, “Cartoon drawing of a biological neuron”. 2021. [Online]. Available: https://cs231n.github.io/neural-networks-1/.

[31] T. Rateke, “Técnicas Subsimbólicas: Redes Neurais”. LAPIX (Image Processing and Computer Graphics Lab), Universidade Federal de Santa Catarina, Florianópolis. [Online]. Available: http://www.lapix.ufsc.br/ensino/reconhecimento-de-padroes/tecnicas-sub-simbolicas-redes-neurais/.

[32] A. de P. Braga, A. Carvalho, and T. B. Ludermir, “Fundamentos de redes neurais artificiais”, Rio de Janeiro: 11ª Escola de Computação, 1998.

[33] M. A. Nielsen, Neural Networks and Deep Learning. Determination Press, 2015.

[34] J. A. Hertz, A. S. Krogh, and R. G. Palmer, Introduction to the theory of neural computation. New York: CRC Press, 2018.

[35] A. Géron, Hands-on machine learning with Scikit-Learn, Keras, and TensorFlow: Concepts, tools, and techniques to build intelligent systems. O’Reilly Media, 2019.

[36] A. Rosebrock, “Deep Learning for Computer Vision with Python”. PyImageSearch, 2017.

[37] V. Dumoulin and F. Visin, “A guide to convolution arithmetic for deep learning”, arXiv preprint arXiv:1603.07285, 2016.

[38] M. Elgendy, Deep Learning for Vision Systems. Manning Publications, 2020.

[39] A. Zhang, Z. C. Lipton, M. Li, and A. J. Smola, Dive into Deep Learning. 2020.

[40] J. Johnson, “Deep Learning for Computer Vision - Fall 2019”. Course website for EECS 498-007 / 598-005 at the University of Michigan, Michigan, USA, 2019. [Online]. Available: https://web.eecs.umich.edu/~justincj/teaching/eecs498/FA2020/.

[41] Stanford Vision Lab, Stanford University, and Princeton University, “ImageNet Large Scale Visual Recognition Challenge (ILSVRC)”. ILSVRC competition website, 2020. [Online]. Available: https://image-net.org/challenges/LSVRC/.

[42] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks”, Advances in neural information processing systems, vol. 25, pp. 1097–1105, 2012.

[43] A. Canziani, A. Paszke, and E. Culurciello, “An analysis of deep neural network models for practical applications”, arXiv preprint arXiv:1605.07678, 2016.

[44] S. P. K. Karri, D. Chakraborty, and J. Chatterjee, “Transfer learning based classification of optical coherence tomography images with diabetic macular edema and dry age-related macular degeneration”, Biomed. Opt. Express, vol. 8, no. 2, pp. 579–592, Feb. 2017, doi: 10.1364/BOE.8.000579.

[45] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition”, in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.

[46] G. Developers, “Guia para iniciantes da Cloud TPU”. Google Cloud website, 2021. [Online]. Available: https://cloud.google.com/tpu/docs/beginners-guide?hl=pt-br.

[47] A. von Wangenheim, “Deep Learning: Introdução ao Novo Conexionismo”. LAPIX (Image Processing and Computer Graphics Lab), Universidade Federal de Santa Catarina, Florianópolis, 2018. [Online]. Available: http://www.lapix.ufsc.br/ensino/reconhecimento-de-padroes/tecnicas-sub-simbolicas-redes-neurais/.

[48] G. Developers, “Versões da API TensorFlow”. TensorFlow website, 2020. [Online]. Available: https://www.tensorflow.org/versions.

[49] J. M. Walker, Artificial Neural Networks, 3rd ed. New York, USA: Humana/Springer Nature, 2021.

[50] S. Chopra, R. Hadsell, and Y. LeCun, “Learning a similarity metric discriminatively, with application to face verification”, in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), 2005, vol. 1, pp. 539–546.

[51] Y. Taigman, M. Yang, M. A. Ranzato, and L. Wolf, “DeepFace: Closing the gap to human-level performance in face verification”, in Proceedings of the IEEE conference on computer vision and pattern recognition, 2014, pp. 1701–1708.

[52] M. Sabri and T. Kurita, “Facial expression intensity estimation using Siamese and triplet networks”, Neurocomputing, vol. 313, pp. 143–154, 2018.

[53] H. Lamba, “One Shot Learning with Siamese Networks using Keras”. Towards Data Science website, 2019. [Online]. Available: https://towardsdatascience.com/one-shot-learning-with-siamese-networks-using-keras-17f34e75bb3d.

[54] H. Zhang, W. Ni, W. Yan, J. Wu, H. Bian, and D. Xiang, “Visual tracking using Siamese convolutional neural network with region proposal and domain specific updating”, Neurocomputing, vol. 275, pp. 2645–2655, 2018.

[55] C. Zhang, W. Liu, H. Ma, and H. Fu, “Siamese neural network based gait recognition for human identification”, in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2016, pp. 2832–2836.

[56] X. Liu, W. Liu, T. Mei, and H. Ma, “PROVID: Progressive and multimodal vehicle reidentification for large-scale urban surveillance”, IEEE Transactions on Multimedia, vol. 20, no. 3, pp. 645–658, 2017.

[57] Q. Wang, J. Gao, and Y. Yuan, “Embedding structured contour and location prior in siamesed fully convolutional networks for road detection”, IEEE Transactions on Intelligent Transportation Systems, vol. 19, no. 1, pp. 230–241, 2017.

[58] J.-A. Gonzalez, E. Segarra, F. Garcia-Granada, E. Sanchis, and L.-F. Hurtado, “Siamese hierarchical attention networks for extractive summarization”, Journal of Intelligent & Fuzzy Systems, vol. 36, no. 5, pp. 4599–4607, 2019.

[59] A. Das, H. Yenala, M. Chinnakotla, and M. Shrivastava, “Together we stand: Siamese networks for similar question retrieval”, in Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2016, pp. 378–387.

[60] J. Bromley, I. Guyon, Y. LeCun, E. Säckinger, and R. Shah, “Signature verification using a ‘Siamese’ time delay neural network”, Advances in neural information processing systems, vol. 6, pp. 737–744, 1993.

[61] R. Gomez, “Understanding Ranking Loss, Contrastive Loss, Margin Loss, Triplet Loss, Hinge Loss and all those confusing names”. Raul Gomez blog, 2019. [Online]. Available: https://gombru.github.io/2019/04/03/ranking_loss/.

[62] R. Hadsell, S. Chopra, and Y. LeCun, “Dimensionality reduction by learning an invariant mapping”, in 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), 2006, vol. 2, pp. 1735–1742.

[63] F. Schroff, D. Kalenichenko, and J. Philbin, “FaceNet: A unified embedding for face recognition and clustering”, in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 815–823.

[64] E. R. Davies, Computer Vision: Principles, Algorithms, Applications, Learning. Academic Press, 2017.

[65] P. Corke, Robotics, Vision and Control: Fundamental Algorithms in MATLAB, 2nd ed. Cham, Switzerland: Springer, 2017.

[66] W. Commons, “Railroad tracks ‘vanishing’ into the distance”. 2006. [Online]. Available: https://commons.wikimedia.org/wiki/File:Railroad-Tracks-Perspective.jpg.

[67] R. Szeliski, Computer vision: algorithms and applications. Springer Science & Business Media, 2010.

[68] R. Hartley and A. Zisserman, Multiple view geometry in computer vision, 2nd ed. New York: Cambridge University Press, 2003.

[69] P. C. Carvalho et al., “Fotografia 3D – 25º Colóquio Brasileiro de Matemática”, Rio de Janeiro: Associação Instituto de Matemática Pura e Aplicada (IMPA), 2005.

[70] E. R. Davies, Computer and Machine Vision: Theory, Algorithms, Practicalities, 4th ed. Waltham, USA: Elsevier, 2012.

[71] M. C. dos Santos, “Revisão de Conceitos em Projeção, Homografia, Calibração de Câmera, Geometria Epipolar, Mapas de Profundidade e Varredura de Planos”. Computer Vision course, Unicamp, under the supervision of Prof. Dr. Hélio Pedrini, São Paulo, 2012. [Online]. Available: https://www.ic.unicamp.br/~rocha/teaching/2012s1/mc949/aulas/additional-material-revision-of-concepts-homography-and-related-topics.pdf.

[72] “Camera Calibration”. OpenCV website. [Online]. Available: https://docs.opencv.org/master/dc/dbb/tutorial_py_calibration.html.