Nicholas Carlini

Nicholas Carlini is a researcher affiliated with Google DeepMind who has published research in the fields of computer security and machine learning. He is particularly known for his work on adversarial machine learning.

Education

Nicholas Carlini obtained his Bachelor of Arts in Computer Science and Mathematics from the University of California, Berkeley in 2013.[1] He remained at Berkeley for his PhD, which he completed in 2018 under the supervision of David Wagner.[1][2][3]

Research

Nicholas Carlini is known for his work on adversarial machine learning. In 2016, working with David Wagner, he developed the Carlini & Wagner attack, a method of generating adversarial examples against machine learning models. The attack proved effective against defensive distillation, a popular defense mechanism in which a student model is trained on the outputs of a teacher model in order to increase the robustness and generalizability of the student model. The attack gained prominence when it was shown to be effective against most other defenses as well, rendering them ineffective.[4][5]

In 2018, Carlini demonstrated an attack against the Mozilla Foundation's DeepSpeech model, showing that by hiding malicious commands inside normal speech input, an attacker could make the model respond to those commands even when they were not discernible by humans.[6][7] In the same year, Carlini and his team at UC Berkeley showed that of the 11 defenses against adversarial attacks accepted to that year's IEEE Symposium on Security and Privacy, seven could be broken.[8]

More recently, he and his team have worked on large language models, creating a questionnaire on which humans typically scored 35% while AI models scored in the 40-percent range; GPT-3 scored 38%, which could be improved to 40% through few-shot prompting. The best performer on the test was UnifiedQA, a model built on Google's T5 and tuned specifically for question-answering tasks.[9]
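In outline, the Carlini & Wagner attack poses the search for an adversarial example as an optimization problem. The following is a simplified statement of the objective of the L2 variant from the 2017 paper, where x is the input, δ the perturbation, Z(·) the model's logits, t the target class, κ a confidence margin, and c a constant balancing distortion against misclassification:

    \min_{\delta}\ \lVert \delta \rVert_2^2 + c \cdot f(x + \delta) \quad \text{subject to} \quad x + \delta \in [0,1]^n,
    \text{where} \quad f(x') = \max\bigl( \max_{i \neq t} Z(x')_i - Z(x')_t,\ -\kappa \bigr).

In the paper, the constant c is chosen by binary search and the box constraint is eliminated with a change of variables, so the problem can be solved with standard gradient-based optimizers.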

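The audio attack on DeepSpeech follows the same pattern. A simplified sketch of the formulation from the 2018 paper: given a waveform x and a target transcription t, the attack seeks a perturbation δ that is quiet relative to the original audio while causing the model to transcribe t, using the network's differentiable CTC loss ℓ_CTC as a proxy for the transcription constraint:

    \min_{\delta}\ \ell_{\text{CTC}}(x + \delta,\, t) \quad \text{subject to} \quad \mathrm{dB}_x(\delta) \le \tau,

where dB_x(δ) = dB(δ) − dB(x) measures the loudness of the perturbation relative to the original signal, and the bound τ is tightened iteratively as the attack succeeds.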
References

  1. "Nicholas Carlini". nicholas.carlini.com. Retrieved 2024-06-04.
  2. "Nicholas Carlini". AI for Good. Retrieved 2024-06-04.
  3. "Graduates". people.eecs.berkeley.edu. Retrieved 2024-06-04.
  4. Pujari, Medha; Cherukuri, Bhanu Prakash; Javaid, Ahmad Y; Sun, Weiqing (2022-07-27). "An Approach to Improve the Robustness of Machine Learning based Intrusion Detection System Models Against the Carlini-Wagner Attack". 2022 IEEE International Conference on Cyber Security and Resilience (CSR). IEEE. pp. 62–67. doi:10.1109/CSR54599.2022.9850306. ISBN 978-1-6654-9952-1.
  5. Schwab, Katharine (2017-12-12). "How To Fool A Neural Network". Retrieved 2023-06-04.
  6. Smith, Craig S. (2018-05-10). "Alexa and Siri Can Hear This Hidden Command. You Can't". The New York Times. ISSN 0362-4331. Retrieved 2024-06-04.
  7. "As voice assistants go mainstream, researchers warn of vulnerabilities". CNET. Retrieved 2024-06-04.
  8. Simonite, Tom. "AI Has a Hallucination Problem That's Proving Tough to Fix". Wired. ISSN 1059-1028. Retrieved 2024-06-04.
  9. Hutson, Matthew (2021-03-03). "Robo-writers: the rise and risks of language-generating AI". Nature. 591 (7848): 22–25. Bibcode:2021Natur.591...22H. doi:10.1038/d41586-021-00530-0. PMID 33658699.