Deep learning: criticism and comment




1 Criticism and comment

1.1 Theory
1.2 Errors
1.3 Cyberthreat





Criticism and comment

Deep learning has attracted both criticism and comment, in some cases from outside the field of computer science.


Theory

A main criticism concerns the lack of theory surrounding some of the methods. Learning in the most common deep architectures is implemented using well-understood gradient descent. However, the theory surrounding other algorithms, such as contrastive divergence, is less clear. (E.g., does it converge? If so, how fast? What is it approximating?) Deep learning methods are often looked at as a black box, with most confirmations done empirically rather than theoretically.


Others point out that deep learning should be looked at as a step towards realizing strong AI, not as an all-encompassing solution. Despite the power of deep learning methods, they still lack much of the functionality needed for realizing this goal entirely. Research psychologist Gary Marcus noted:



Realistically, deep learning is only part of the larger challenge of building intelligent machines. Such techniques lack ways of representing causal relationships (...) have no obvious ways of performing logical inferences, and they are also still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are typically used. The most powerful A.I. systems, like Watson (...) use techniques like deep learning as just one element in a very complicated ensemble of techniques, ranging from the statistical technique of Bayesian inference to deductive reasoning.



As an alternative to this emphasis on the limits of deep learning, one author speculated that it might be possible to train a machine vision stack to perform the sophisticated task of discriminating between "old master" and amateur figure drawings, and hypothesized that such a sensitivity might represent the rudiments of non-trivial machine empathy. The same author proposed that this would be in line with anthropology, which identifies a concern with aesthetics as a key element of behavioral modernity.


In further reference to the idea that artistic sensitivity might inhere within relatively low levels of the cognitive hierarchy, a published series of graphic representations of the internal states of deep (20-30 layers) neural networks attempting to discern, within essentially random data, the images on which they were trained demonstrated a visual appeal: the original research notice received well over 1,000 comments and was the subject of what was for a time the most frequently accessed article on The Guardian's web site.


Errors

Some deep learning architectures display problematic behaviors, such as confidently classifying unrecognizable images as belonging to a familiar category of ordinary images, and misclassifying minuscule perturbations of correctly classified images. Goertzel hypothesized that these behaviors are due to limitations in their internal representations, and that these limitations would inhibit integration into heterogeneous multi-component AGI architectures. These issues may possibly be addressed by deep learning architectures that internally form states homologous to image-grammar decompositions of observed entities and events. Learning a grammar (visual or linguistic) from training data would be equivalent to restricting the system to commonsense reasoning that operates on concepts in terms of grammatical production rules, and is a basic goal of both human language acquisition and AI.
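The "minuscule perturbation" failure mode can be illustrated on a plain linear classifier, where the effect is exact rather than approximate: a per-feature change far too small to notice still flips the decision because it accumulates across many features. This is the same intuition behind gradient-sign attacks on deep networks. All numbers and names below are invented for illustration, not taken from any published attack.

```python
def classify(weights, x):
    """Toy linear classifier: class 1 if the score is positive, else 0."""
    score = sum(w * xi for w, xi in zip(weights, x))
    return 1 if score > 0 else 0

def perturb(weights, x, eps):
    """Shift each feature by eps against the weight's sign -- a
    fast-gradient-sign-style step, which is exact for a linear model."""
    def sign(w):
        return 1.0 if w > 0 else -1.0 if w < 0 else 0.0
    return [xi - eps * sign(w) for w, xi in zip(weights, x)]

# Many small weights: a per-feature nudge of eps moves the total score
# by eps * sum(|w|), which grows with the number of features.
n = 1000
weights = [0.01] * n
x = [1.0] * 500 + [-0.96] * 500     # score ~ 0.2  -> class 1
x_adv = perturb(weights, x, 0.03)   # score ~ -0.1 -> class 0
```

Each feature moves by at most 0.03 (3% of its magnitude), yet the predicted class changes, because the tiny shifts all push the score in the same direction.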


Cyberthreat

As deep learning moves from the lab into the world, research and experience show that artificial neural networks are vulnerable to hacks and deception. By identifying patterns that these systems use to function, attackers can modify inputs to ANNs in such a way that the ANN finds a match that human observers would not recognize. For example, an attacker can make subtle changes to an image such that the ANN finds a match even though the image looks to a human nothing like the search target. Such manipulation is termed an "adversarial attack". In 2016 researchers used one ANN to doctor images in a trial and error fashion, identify another's focal points, and thereby generate images that deceived it. The modified images looked no different to human eyes. Another group showed that printouts of doctored images, then photographed, successfully tricked an image classification system. One defense is reverse image search, in which a possible fake image is submitted to a site such as TinEye that can then find other instances of it. A refinement is to search using only parts of the image, to identify images from which that piece may have been taken.
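The trial-and-error idea can be sketched with a query-only ("black box") attack: the attacker never sees the model's internals, only a reported match score, and greedily keeps whichever single-pixel change hurts that score most. The 4x4 binary "images", the template matcher, and the scoring below are all invented for illustration; real attacks query real classifiers in the same spirit.

```python
# Toy victim: a nearest-template matcher over 4x4 binary images.
TEMPLATES = {
    "stop":  (1,1,1,1, 1,0,0,1, 1,0,0,1, 1,1,1,1),
    "yield": (0,1,1,0, 0,1,1,0, 0,1,1,0, 0,0,0,0),
}

def _dist(a, b):
    return sum(x != y for x, y in zip(a, b))

def classify(image):
    """The victim model: the nearest template wins."""
    return min(TEMPLATES, key=lambda name: _dist(image, TEMPLATES[name]))

def match_score(image, label):
    """What the attacker can query: the model's confidence in `label`."""
    return 1 - _dist(image, TEMPLATES[label]) / len(image)

def attack(image, label, budget):
    """Flip up to `budget` pixels, one at a time, each time keeping the
    single flip that most lowers the model's confidence in `label`."""
    img = list(image)
    changed = 0
    while changed < budget and classify(img) == label:
        best_i, best_s = None, match_score(img, label)
        for i in range(len(img)):
            img[i] ^= 1                      # try a flip...
            s = match_score(img, label)
            img[i] ^= 1                      # ...then undo it
            if s < best_s:
                best_i, best_s = i, s
        if best_i is None:
            break
        img[best_i] ^= 1                     # keep the best flip
        changed += 1
    return tuple(img), changed

adv, n_flips = attack(TEMPLATES["stop"], "stop", budget=12)
```

The attacker needs no gradients, weights, or training data, only repeated queries; this is why exposing raw confidence scores is itself considered an attack surface.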


Another group showed that certain psychedelic spectacles could fool a facial recognition system into thinking ordinary people were celebrities, potentially allowing one person to impersonate another. In 2017 researchers added stickers to stop signs and caused an ANN to misclassify them.


ANNs can, however, be further trained to detect attempts at deception, potentially leading attackers and defenders into an arms race similar to the kind that already defines the malware defense industry. ANNs have been trained to defeat ANN-based anti-malware software by repeatedly attacking a defense with malware that was continually altered by a genetic algorithm until it tricked the anti-malware while retaining its ability to damage the target.
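The evolutionary evasion loop can be sketched in miniature: a genetic algorithm mutates a sample until a toy signature-based detector stops flagging it, while a stand-in "still damages the target" check keeps passing. The signatures, the payload marker, and the fitness function below are all invented for illustration; real malware, detectors, and fitness signals are vastly more complex.

```python
import random

SIGNATURES = ("evil", "xploit")

def detected(sample):
    """Toy anti-malware: flag anything containing a known signature."""
    return any(sig in sample for sig in SIGNATURES)

def still_works(sample):
    """Stand-in for 'retains its ability to damage the target'."""
    return "PAYLOAD" in sample

def evolve(seed, generations=500, pop_size=20, rng_seed=0):
    rng = random.Random(rng_seed)
    letters = "abcdefghijklmnopqrstuvwxyz"

    def mutate(s):
        i = rng.randrange(len(s))
        if s[i].isupper():               # never touch the payload marker
            return s
        return s[:i] + rng.choice(letters) + s[i + 1:]

    def fitness(s):                      # fewer matched signatures is better
        return sum(sig in s for sig in SIGNATURES)

    population = [seed] * pop_size
    for _ in range(generations):
        offspring = [mutate(rng.choice(population)) for _ in range(pop_size)]
        offspring = [s for s in offspring if still_works(s)]
        # elitist selection: keep the least-detected variants
        population = sorted(population + offspring, key=fitness)[:pop_size]
        if not detected(population[0]):
            return population[0]
    return None

evasive = evolve("evil PAYLOAD xploit")
```

Selection pressure alone drives the sample out of the detector's signature set within a few generations, which is the core of the arms-race dynamic described above.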


Another group demonstrated that certain sounds could make the Google voice command system open a particular web address that would download malware.


In "data poisoning", false data is continually smuggled into a machine learning system's training set to prevent it from achieving mastery.
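A minimal sketch of the effect: flipping labels in the training set wrecks even a simple nearest-centroid classifier, because the poisoned points drag each class centroid toward the other class. The data, the model, and the amount of poison below are all invented for illustration.

```python
def centroid(points):
    """Coordinate-wise mean of a list of points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(data):
    """data: list of (point, label) pairs -> per-class centroids."""
    by_label = {}
    for point, label in data:
        by_label.setdefault(label, []).append(point)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, point):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], point))

def accuracy(model, data):
    return sum(predict(model, p) == y for p, y in data) / len(data)

# Two well-separated one-dimensional clusters (embedded in 2D).
clean = [((x, 0.0), "a") for x in (0.0, 0.2, 0.4)] + \
        [((x, 0.0), "b") for x in (2.0, 2.2, 2.4)]

# Poisoning by label flipping: smuggle in copies of each cluster's
# points carrying the *other* cluster's label.
poison = ([((x, 0.0), "b") for x in (0.0, 0.2, 0.4)]
          + [((x, 0.0), "a") for x in (2.0, 2.2, 2.4)]) * 3

model_clean = train(clean)
model_poisoned = train(clean + poison)
```

With enough flipped labels the learned centroids effectively swap, and accuracy on the clean data collapses, which is the "prevented from achieving mastery" outcome in its starkest form.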







