Security Holes in the Machine Learning Pipeline
Details
Python is the second most popular programming language, is very easy to start using, and has become a de facto standard in machine learning. Here be dragons!
Many machine learning (ML) models are Python pickle files under the hood. Pickling lets a model's full state be written to disk (conserving memory between sessions), enables start-and-stop model training, and makes trained models portable (and, thereby, shareable).
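For instance, a typical save-and-restore workflow looks like the sketch below. A plain dictionary stands in for a real trained model here; in practice it might be a scikit-learn estimator or part of a framework checkpoint, and the filename is illustrative:

import pickle

# Stand-in for a trained model; in practice this would be a
# fitted estimator or a set of learned weights.
model = {"weights": [0.1, 0.2, 0.3], "epochs_completed": 10}

# Persist the model's full state to disk...
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# ...and restore it later (or on another machine) to resume work.
with open("model.pkl", "rb") as f:
    restored = pickle.load(f)

assert restored == model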
This webinar discusses the underhanded antics that can occur simply from loading an untrusted pickle file or ML model. We also describe Fickling, a new tool that can help you reverse engineer, test, and even create malicious pickle files. ML practitioners will learn about the security risks inherent in standard ML practices; security engineers will learn about a new tool for constructing and forensically examining pickle files. Either way, by the end of this webinar, pickling will hopefully leave a sour taste in your mouth.
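As a concrete, deliberately harmless illustration of the risk the webinar covers: pickle's __reduce__ hook lets an object specify a callable to run when it is deserialized, so merely loading untrusted bytes can execute arbitrary code. The class name and echoed string below are made up for the demo:

import os
import pickle

class Malicious:
    # __reduce__ tells pickle how to reconstruct this object.
    # An attacker can abuse it to run any callable at load time.
    def __reduce__(self):
        return (os.system, ('echo "arbitrary code ran at unpickling time"',))

payload = pickle.dumps(Malicious())

# The victim only has to *load* the bytes -- the shell command above
# runs during deserialization, before any model code is ever called.
pickle.loads(payload)

This is why "never unpickle data you don't trust" appears in Python's own documentation, and why a forensic tool for inspecting pickle bytecode is useful.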
When
Friday, September 17, 2021, 10 AM PDT / 12 PM CDT / 1 PM EDT
Presenter
Mark Kerzner is an experienced, hands-on software architect who practices and teaches AI, machine learning, blockchain, Spark, Hadoop, NoSQL, and more. He has worked in a variety of verticals (high tech, healthcare, oil & gas, legal, fintech). His classes are hands-on and draw heavily on his industry experience. Mark is certified in Google Cloud (GCP), Amazon Web Services (AWS), and Hadoop. He is also an author and maintainer of FreeEed, a popular open-source project for lawyers and researchers that deals with search and massive scalability.
Webinar Recording