BSidesLV 2017 has ended


Ground Truth
Tuesday, July 25

11:30 PDT

Hidden Hot Battle Lessons of Cold War: All Learning Models Have Flaws, Some Have Casualties
In pursuit of realistic expectations for learning models, can we better prepare for adversarial environments by examining failures in the field? All models have flaws; beyond the usual menu of problems with learning, it is the rapidly increasing risk of a catastrophic-level failure that makes data /robustness/ a far more immediate concern. This talk pulls forward surprising and obscured learning errors from the Cold War to give context to modern machine learning successes, and to how quickly things may fall apart in evolving domains of cyber conflict.


Davi Ottenheimer

Product Security, MongoDB
flyingpenguins, Cyberwar History, Threat Intel, Hunt, Active Defense, Cyber Letters of Marque, Cloudy Virtualization Container Security, Adversarial Machine Learning, Data Integrity and Ethics in Machine Learning (Formerly Known as Realities of Securing Big Data).

Tuesday July 25, 2017 11:30 - 12:00 PDT
Ground Truth (Firenze) 255 E Flamingo Rd, Las Vegas, NV 89169

12:00 PDT

Deep Learning Neural Networks – Our Fun Attempt At Building One
There’s a lot of talk about the benefits of deep learning (neural networks) and how it’s the new electricity that will power us into the future. Medical diagnosis, computer vision and speech recognition are all examples of use-cases where neural networks are being applied. This raises the question: what do neural-net applications for cyber security use-cases look like? Specifically, how does the process work when applying neural nets to detect malicious URLs? Follow along as we go from no machine learning knowledge to a working neural net. Along the way you’ll learn what it took to classify URLs as malicious or benign, as well as lessons learned directly from our practical attempt at this challenge. Come find out if we had mad success or abject failure; a fun time either way!
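The URL-classification task the abstract describes can be sketched end to end in a few lines. This is a hedged illustration, not the presenter's actual approach: it uses hashed character trigrams as features and a single logistic neuron (the simplest possible "network"), trained on invented toy URLs and labels.

```python
import math
import zlib

DIM = 64  # hashed feature space; tiny on purpose

def featurize(url):
    """Bag of character trigrams, hashed into a fixed-length vector."""
    vec = [0.0] * DIM
    for i in range(len(url) - 2):
        vec[zlib.crc32(url[i:i + 3].encode()) % DIM] += 1.0
    return vec

def train(samples, epochs=200, lr=0.1):
    """A single logistic neuron trained with SGD on log-loss."""
    w, b = [0.0] * DIM, 0.0
    for _ in range(epochs):
        for url, label in samples:
            x = featurize(url)
            p = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            g = p - label  # gradient of log-loss w.r.t. the pre-activation
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(model, url):
    """Score in (0, 1); above 0.5 reads as 'malicious'."""
    w, b = model
    z = sum(wi * xi for wi, xi in zip(w, featurize(url))) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy corpus -- labels are invented for illustration, not real threat intel.
data = [
    ("http://example.com/index.html", 0),
    ("http://news.example.org/story", 0),
    ("http://paypa1-login.xx/verify.exe", 1),
    ("http://free-w1n.xx/claim.exe", 1),
]
model = train(data)
```

A real system would swap the hashed trigrams and single neuron for learned embeddings and a deeper network, but the train/predict loop keeps the same shape.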


Ladi Adefala

Sr. Security Strategist - FortiGuard Labs, Fortinet
Ladi Adefala has served in a variety of strategic technical and leadership roles focused on advanced cyber security. As a FortiGuard Labs cyber security expert with Fortinet, he's engaged in cyber threat intelligence and research efforts. His research interests include cyber threat...

Tuesday July 25, 2017 12:00 - 12:30 PDT
Ground Truth (Firenze) 255 E Flamingo Rd, Las Vegas, NV 89169

14:00 PDT

Your model isn't that special: zero to malware model in Not Much Code and where the real work lies
Deep learning has become pervasive in a plethora of consumer applications. And there are good reasons why all the kids are doing it these days. (1) True end-to-end deep learning obviates, in many applications, the need to laboriously hand-craft features for ingest by a model. (2) A robust menagerie of flexible deep learning APIs (tensorflow, theano, keras, caffe, torch, mxnet, cntk, …) have made exotic deep learning architectures and ideas extremely accessible. (3) Especially in domains of object classification, machine translation, and speech recognition, deep learning solutions dominate the leaderboards, advancing state-of-the-art performance year over year. What does this all mean? Lazy people can achieve state-of-the-art performance with very little work and a few lines of code, and don’t really have to speak math or machine learning, or really even have any domain expertise.

But what about for information security? In this talk, I’ll walk through steps to create a deep learning malware model from scratch: data curation, sample labeling, architecture specification, model training and model validation. I’ll review bleeding-edge concepts in deep learning that have disrupted other domains and show how they can be applied (sometimes poorly!) to the hardest parts of building a malware classification model. Finally, I’ll highlight what separates the easy-to-code models from product-worthy performance, and try to justify why I should still be employed as a data scientist after having demonstrated how easy this all is. Hint: the reasons have less to do with your model, and more to do with your data.
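The pipeline steps named above (data curation, labeling, featurization, training, validation) can be sketched in miniature. This is an illustrative stand-in, not the speaker's code: it dedupes samples by hash, uses byte histograms in place of learned features, and trains a perceptron rather than a deep network, all on synthetic toy data.

```python
import hashlib
import random

def curate(samples):
    """Deduplicate raw (bytes, label) pairs by SHA-256: one row per unique binary."""
    seen, out = set(), []
    for blob, label in samples:
        digest = hashlib.sha256(blob).hexdigest()
        if digest not in seen:
            seen.add(digest)
            out.append((blob, label))
    return out

def featurize(blob):
    """Normalized 256-bin byte histogram -- a hand-rolled stand-in for the
    features a real deep model would learn end to end."""
    hist = [0.0] * 256
    for bt in blob:
        hist[bt] += 1.0
    n = max(len(blob), 1)
    return [c / n for c in hist]

def train_validate(samples, holdout=0.25, epochs=50, lr=0.5):
    """Shuffle, split, train a perceptron, and report holdout accuracy."""
    random.seed(0)  # deterministic split for the example
    samples = list(samples)
    random.shuffle(samples)
    k = int(len(samples) * (1 - holdout))
    train, valid = samples[:k], samples[k:]
    w, b = [0.0] * 256, 0.0
    for _ in range(epochs):
        for blob, y in train:
            x = featurize(blob)
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            if pred != y:  # classic perceptron update on mistakes only
                sign = 1 if y == 1 else -1
                w = [wi + lr * sign * xi for wi, xi in zip(w, x)]
                b += lr * sign
    hits = sum(
        ((1 if sum(wi * xi for wi, xi in zip(w, featurize(blob))) + b > 0 else 0) == y)
        for blob, y in valid
    )
    return hits / max(len(valid), 1)

# Synthetic corpus: ASCII text as 'benign', uniform random bytes as 'malicious'.
rng = random.Random(1)
benign = [(("hello world %d " % i).encode() * 20, 0) for i in range(20)]
malicious = [(bytes(rng.randrange(256) for _ in range(260)), 1) for _ in range(20)]
accuracy = train_validate(curate(benign + malicious))
```

As the abstract hints, the model code is the short part; in practice the curation and labeling stages dwarf everything else.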


Hyrum Anderson

Technical Director of Data Science, Endgame, Inc.
Hyrum Anderson is the technical director of data science at Endgame. Prior to joining Endgame he worked as a data scientist at FireEye Labs, Mandiant, Sandia National Laboratories and MIT Lincoln Laboratory. He received his PhD in Electrical Engineering (signal processing + machine...

Tuesday July 25, 2017 14:00 - 14:25 PDT
Ground Truth (Firenze) 255 E Flamingo Rd, Las Vegas, NV 89169

14:30 PDT

Getting insight out of and back into deep neural networks
Deep learning has emerged as a powerful tool for classifying malicious software artifacts; however, the generic black-box nature of these classifiers makes it difficult to evaluate their results, diagnose model failures, or effectively incorporate existing knowledge into them.  In particular, a single numerical output – either a binary label or a ‘maliciousness’ score – for some artifact doesn’t offer any insight as to what might be malicious about that artifact, or offer any starting point for further analysis.  This is particularly important when examining such artifacts as malicious HTML pages, which often have small portions of malicious content distributed among much larger amounts of completely benign content.

In this applied talk, we present the LIME method developed by Ribeiro, Singh, and Guestrin, and show – with numerous demonstrations – how it can be adapted from the relatively straightforward domain of “explaining” text or image classifications to the much harder problem of supporting analysts in performing forensic analysis of malicious HTML documents.  In particular, we can not only identify features of the document that are critical to performance of the model (as in the original work), but also use this approach to identify key components of the document that the model “thinks” are likely to contain malicious elements.  This allows analysts to quickly assess both the validity of the model’s conclusion and rapidly identify regions that require additional inspection and evaluation.  In doing so the deep learning model is converted from a gnomic “black box” into a useful exploratory tool for malicious artifacts, even when the deep learning model itself may label the sample incorrectly. 

We complement this work by showing how knowledge extracted by this method – as well as existing expert knowledge – can be readily re-incorporated into deep learning models.
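The core perturbation idea behind this kind of explanation can be sketched from scratch. This is a simplified occlusion-style approximation, not the Ribeiro, Singh, and Guestrin implementation (LIME proper fits a locally weighted linear model over the perturbed samples); the toy "classifier" and HTML chunks are invented for illustration.

```python
import random
import re

def explain(score_fn, chunks, n_samples=300, seed=0):
    """Attribute to each chunk the average change in a black-box score
    when the chunk is present vs. absent across random sub-documents."""
    rng = random.Random(seed)
    n = len(chunks)
    p_sum, p_cnt = [0.0] * n, [0] * n
    a_sum, a_cnt = [0.0] * n, [0] * n
    for _ in range(n_samples):
        mask = [rng.random() < 0.5 for _ in range(n)]
        s = score_fn([c for c, keep in zip(chunks, mask) if keep])
        for i, keep in enumerate(mask):
            if keep:
                p_sum[i] += s
                p_cnt[i] += 1
            else:
                a_sum[i] += s
                a_cnt[i] += 1
    return [p_sum[i] / max(p_cnt[i], 1) - a_sum[i] / max(a_cnt[i], 1)
            for i in range(n)]

# Stand-in "classifier": flags documents containing an eval(unescape(...)) call.
def toy_score(chunks):
    return 1.0 if re.search(r"eval\(unescape", "\n".join(chunks)) else 0.0

html_chunks = [
    "<html><head><title>Invoice</title></head>",
    "<p>Dear customer, please see the attached invoice.</p>",
    "<script>eval(unescape('%65%76%69%6c'))</script>",
    "</html>",
]
weights = explain(toy_score, html_chunks)  # highest weight -> most suspect chunk
```

The chunk with the highest weight is the one an analyst would inspect first, which is exactly the triage workflow the talk describes.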


Richard Harang

Principal Data Scientist, Sophos
Richard Harang is a Principal Data Scientist at Sophos with over seven years of research experience at the intersection of computer security, machine learning, and privacy. Prior to joining Sophos, he served as a scientist at the U.S. Army Research Laboratory, where he led the research...

Tuesday July 25, 2017 14:30 - 15:25 PDT
Ground Truth (Firenze) 255 E Flamingo Rd, Las Vegas, NV 89169

15:30 PDT

Transfer Learning: Analyst-Sourcing Behavioral Classification
Information Security (InfoSec) operations analysts are deluged with data, and that is without even reviewing a significant portion of an organization’s logged data - and certainly not in anything close to real-time. Additionally, too many of the alerts generated by log reviews (e.g., by a SIEM) are false positives - an unnecessary distraction for analysts, and a contribution to the embarrassing number of false negatives. With log volumes growing significantly year over year, a radical change in approach is needed.

Enter AI. Not just machine learning, but AI; specifically, active learning. In this presentation, we will discuss how to augment a critical shortage of trained analyst personnel with active learning, institutionalize their knowledge of benign traffic and attacks, and how to share that knowledge between organizations.


Ignacio Arnaldo

Chief Data Scientist, PatternEx
I am working at PatternEx, a Bay Area startup developing an artificial intelligence platform for InfoSec. The platform leverages state-of-the-art machine learning and artificial intelligence algorithms for real-time attack prevention in enterprise applications.

Tim Mather

Chief Security Strategist, PatternEx
Long-time information security practitioner, single parent of three (all cats - rescues).

Tuesday July 25, 2017 15:30 - 16:00 PDT
Ground Truth (Firenze) 255 E Flamingo Rd, Las Vegas, NV 89169

17:00 PDT

The Human Factor: Why Are We So Bad at Security and Risk Assessment?
How does the science of human perception and decision making influence the security sector? How can we use information about how people make decisions to create more successful security professionals? In the 1970s, “fringe” psychologists began to question the phenomenon of decision making, seeking to understand the mechanism by which individuals will make seemingly unfathomable choices in the face of obvious deterrents. When we have any personal stake in a situation (e.g., what to eat for dinner or whom to vote for), our ability to take stock and react reasonably becomes nearly non-existent.

There are numerous academic studies on decision-making and perception whose insights have been applied to various industries over the years with surprising success. Why do we make unintelligent choices? Why are we so overwhelmingly deficient at risk assessment? This session will explore how the science of decision making applies to the security sector, empowering attendees to walk away with a better understanding of how these concepts can be leveraged to build more robust and useful security tools, as well as more successful training models. Supported by the research of Nobel prize-winning psychologist Daniel Kahneman, I will introduce these techniques and discuss how they can help security in several practical ways.


John Nye

VP, Cybersecurity Strategy, CynergisTek, Inc.
John Nye is Vice President of Cybersecurity Strategy for CynergisTek and has spent the majority of the last decade working in Information Security, half that time working exclusively as a professional penetration tester. Besides testing and improving security, John has a passion for...

Tuesday July 25, 2017 17:00 - 17:55 PDT
Ground Truth (Firenze) 255 E Flamingo Rd, Las Vegas, NV 89169

18:00 PDT

Behavioral Analysis from DNS and Network Traffic
Using behavioral analysis, it's possible to observe and create a baseline of average behavior on a network, enabling intelligent notification of anomalous activity. This talk will demonstrate methods of performing this activity in multiple environments. Attendees will learn new methods which they can apply to further monitor and secure their networks.
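A minimal version of the baseline-then-flag approach might look like the following; the counts and the three-sigma threshold are illustrative assumptions, not the speaker's method.

```python
from statistics import mean, stdev

def build_baseline(history):
    """Summarize training-window counts as (mean, standard deviation)."""
    return mean(history), stdev(history)

def is_anomalous(count, mu, sigma, threshold=3.0):
    """Flag counts more than `threshold` standard deviations above baseline."""
    return (count - mu) / sigma > threshold

# Hourly DNS query counts for one host during a quiet training week (invented).
history = [110, 95, 102, 98, 120, 105, 99, 101, 97, 108]
mu, sigma = build_baseline(history)
```

Production systems would maintain baselines per host, per record type, and per time-of-day rather than one global pair of numbers, but the detection step is the same comparison.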


Josh Pyorre

Security Researcher, Cisco Umbrella
I've been in security for about 20 years, starting as a field service engineer, moving on to sysadmin and running my own consulting company. I then worked at NASA as their first analyst for their new SOC. After a few years, I went to work for Mandiant to help them build their SOC...

Tuesday July 25, 2017 18:00 - 18:55 PDT
Ground Truth (Firenze) 255 E Flamingo Rd, Las Vegas, NV 89169

19:00 PDT

Sympathy for the Developer
In the realm of software security, developers are without question a major focus of blame, putting security teams in conflict with engineering. In general, the unwritten rule is that developers who make security mistakes either don't know, or don't care to know, the "right" way to do things. What if this were framed differently? This talk presents evidence that software security flaws occur at a fairly steady rate, independent of which team or organization is developing the code.

In other words, everyone poops. This talk aims to present evidence, based on previous reports and new research, that bugs happen and that the rate at which they are introduced hasn’t noticeably gone down during the past five years, focusing specifically on how often SQL injection weaknesses are found in new applications using Veracode’s static scanning engine. Security flaws are going to occur; I propose that the area for improvement is in finding them early and assisting developers with fixing them.


Sarah Gibson

Application Security Consultant, Veracode
Nerdy about web application security. Currently talks to developers about how to make their applications more secure.

Tuesday July 25, 2017 19:00 - 19:30 PDT
Ground Truth (Firenze) 255 E Flamingo Rd, Las Vegas, NV 89169
Wednesday, July 26

10:00 PDT

The New Cat and Mouse Game: Attacking and Defending Machine Learning Based Software
Machine learning is increasingly woven into software that determines what objects our cars recognize as obstacles, whether or not we have cancer, what news articles we should read, and whether or not we should have access to a building or device. Thus far, the technology community has focused on the benefits of machine learning rather than the security risks. And while the security community has raised concerns about machine learning, most security professionals aren't also machine learning experts, and thus can miss ways in which machine learning systems can be manipulated. My talk will help to close this gap, providing an overview of the kinds of attacks that are possible against machine learning systems, an overview of state-of-the-art methods for making machine learning systems more robust, and a live demonstration of the ways one can attack (and defend) a state-of-the-art machine learning based intrusion detection system.
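One of the simplest attacks of this kind can be shown against a linear model: perturb the input against the gradient of the "malicious" score until the classifier flips. This toy sketch (invented weights and features) is a linear-model analogue of FGSM-style evasion, not the demonstration from the talk.

```python
import math

def score(w, b, x):
    """Detector's probability that x is malicious (logistic over a linear model)."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def evade(w, b, x, step=0.1, max_iter=200):
    """Nudge each feature against the sign of its weight until the score
    drops below 0.5 -- the gradient of the score w.r.t. x_i has w_i's sign."""
    x = list(x)
    for _ in range(max_iter):
        if score(w, b, x) < 0.5:
            break
        x = [xi - step * (1 if wi > 0 else -1 if wi < 0 else 0)
             for xi, wi in zip(x, w)]
    return x

# Invented detector weights: positive weight means 'more malicious'.
w, b = [1.2, -0.4, 0.8], -0.5
x_mal = [2.0, 0.1, 1.5]           # sample the detector confidently flags
x_adv = evade(w, b, x_mal)        # evasive variant of the same sample
```

Against deep models the attacker needs the (estimated) gradient rather than the raw weights, but the loop is structurally identical, which is why white-box access makes evasion so cheap.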


Joshua Saxe

Chief Data Scientist, Sophos
Joshua Saxe is Chief Data Scientist at Sophos, where he and his team focus on developing breakthrough security data science technologies. Highlights of his work have included leading research to develop neural networks for detecting malicious PE, URL and HTML content, developing a...

Wednesday July 26, 2017 10:00 - 10:25 PDT
Ground Truth (Firenze) 255 E Flamingo Rd, Las Vegas, NV 89169

10:30 PDT

Mining Software Vulns in SCCM / NIST’s NVD – The Rocky Road to Data Nirvana
Patch management for 3rd-party software can be a significant challenge. The raw data for effective vulnerability management is available in MS’ SCCM (software inventory) and NIST’s NVD (vulnerability database). However, extracting the relevant information from complex, sometimes undocumented data structures poses significant challenges.

We set the stage first with a brief overview of SCCM / NVD data structures as well as a look at a (non-typical but interesting!) production environment. Then we’ll take a quick dive into data wrangling / Machine Learning fundamentals applied to this problem: feature extraction, choice of approach, algorithm choice and tuning.

Once the technical challenges are resolved, the path to “Data Nirvana” can still be strewn with significant non-technical hurdles to overcome as well. We will discuss some practical “been there, done that” examples. Following a “Lessons Learned” summary, there will be a demo of the tool.


Loren Gordon

Security Architect, Ubisoft
With over 25 years’ experience, Loren has done extensive stints at 2 large financial institutions, a major retailer, a world-class telco, a service bureau or two, and now Ubisoft (the greatest gaming company ever!). Loren has worked on everything from mobile phones, laptops and...

Wednesday July 26, 2017 10:30 - 10:55 PDT
Ground Truth (Firenze) 255 E Flamingo Rd, Las Vegas, NV 89169

11:00 PDT

Building a Benign Data Set
Though featurization is important, the datasets used to make conclusions are just as important, if not more so. Information Security researchers often cannot release data, resulting in lack of benchmark datasets and causing cross-dataset generalization to be understudied in this domain. Despite this fact, presence of dataset bias (especially negative set bias) is now common knowledge in machine learning for malware classification. For these reasons, we have developed a standard for benign datasets to be used toward machine learning in the malware classification domain. We are also releasing a sample benign data set designed to minimize these problems.


Rob Brandon

Security Researcher, Booz Allen Hamilton
Rob is currently a security researcher with Booz Allen Hamilton's Dark Labs. He has over a decade of experience in the security field, primarily in the areas of network traffic analysis, forensics, reverse engineering, and machine learning. Rob holds a PhD in Computer Science...

John Seymour

University of Maryland, Baltimore County
John Seymour is a Senior Data Scientist at ZeroFOX, Inc. by day, and Ph.D. student at University of Maryland, Baltimore County by night. He researches the intersection of machine learning and InfoSec in both roles. He’s mostly interested in dataset bias (seriously, do people still...

Wednesday July 26, 2017 11:00 - 11:25 PDT
Ground Truth (Firenze) 255 E Flamingo Rd, Las Vegas, NV 89169

11:30 PDT

A System Dynamics Approach to CNO Modelling
This paper is based in the field of System Dynamics (SD) Modelling. Recent research on Advanced Persistent Threats (APTs) has focused on development of tools, tactics, and procedures (TTP). However, developing an understanding of the managing bodies and bureaucracies that drive these actors and their computer network operations (CNOs) is just as significant as understanding their TTP. This paper proposes a model that focuses on how APTs allocate and utilize their resources. The assumption is that within this allocation there is an optimal way to operate to either attack or defend infrastructure. The model strives to explain the optimal resource allocation of APTs and targets based on the feedback loops present in SD.
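A stock-and-flow model of the kind SD uses can be sketched with one stock and two feedback loops. This toy simulation (invented rates, simple Euler integration) illustrates the reinforcing/balancing structure the abstract refers to, not the paper's actual model.

```python
def simulate(steps=100, dt=0.1, attack_rate=0.05, remediate_rate=0.08,
             total_hosts=100.0, compromised0=1.0):
    """One stock (compromised hosts), an infection inflow (reinforcing loop:
    contact between compromised and clean hosts) and a remediation outflow
    (balancing loop), integrated with Euler steps."""
    c = compromised0
    trajectory = [c]
    for _ in range(steps):
        inflow = attack_rate * c * (total_hosts - c) / total_hosts
        outflow = remediate_rate * c
        c = min(max(c + dt * (inflow - outflow), 0.0), total_hosts)
        trajectory.append(c)
    return trajectory

# Defender outpaces attacker: the balancing loop dominates, compromise decays.
defended = simulate()
# Attacker resources quadrupled: the reinforcing loop dominates and it spreads.
overrun = simulate(attack_rate=0.2)
```

Varying `attack_rate` and `remediate_rate` is the toy counterpart of the resource-allocation question the paper studies: which loop dominates determines whether the intrusion dies out or grows.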


Sara Mitchell

Recent Masters Graduate, Carnegie Mellon University
Recent graduate of the Information Security Policy and Management program at the Heinz College at Carnegie Mellon University. Studies and research experiences focused on threat intelligence and modelling.

Wednesday July 26, 2017 11:30 - 11:55 PDT
Ground Truth (Firenze) 255 E Flamingo Rd, Las Vegas, NV 89169

12:00 PDT

The Role of Data Visualization in Improving Machine Learning Models
Improving a machine learning model is impossible without a clear understanding of its current performance. In order to get that understanding, the Endgame data science team built Bit Inspector. Bit Inspector is an internal data visualization tool that Endgame uses to communicate the proficiency of our binary classification product, MalwareScore, through various data visualizations. Bit Inspector includes plots and metrics used to judge the ultimate performance of a model overall and across many sample subclasses. It also displays details about individual samples that can provide context about misclassifications. Bit Inspector has grown to include model performance summaries and real time performance tracking, and has proven valuable not just for data scientists, but also for project and product managers and executives to better understand the efficacy of MalwareScore. By tracking the right metrics through data visualizations, a data science team can stay focused on improving the model and communicating that improvement to stakeholders.


Phil Roth

Data Scientist, Endgame
Dr. Phil Roth is a senior data scientist at Endgame, where he develops products that help security analysts find and respond to threats. This work has ranged from tuning a machine learning algorithm to best identify malware to building a data exploration platform for HTTP request...

Wednesday July 26, 2017 12:00 - 12:30 PDT
Ground Truth (Firenze) 255 E Flamingo Rd, Las Vegas, NV 89169

14:00 PDT

Data visualization in security: Still home of the WOPR?
Visualization of security data has not advanced significantly since the days of the WOPR in War Games. Other tech industries have embraced the role of modern user interfaces to facilitate and expedite data search, analysis and discovery, which has significantly helped users in those industries gain insights from a big data environment. In contrast, the security industry prefers to relegate everyone into command line prompts and clunky interfaces with minimal functionality and an inability to scale to the volume, velocity, and variety of security data. I’ll address the core challenges and impact of the industry’s failure to take data visualization and user experience seriously, and provide recommendations on key areas that would most benefit from modern data visualization. Through the use of attack timelines, I’ll demonstrate how we, as an industry, must move beyond familiar visualization conventions (that tend to break at scale) and provide functional data visualization that is usable for analysts and operators across all levels of expertise.


Matthew Park

UX Lead, Endgame
Matthew Park is the UX Lead at Endgame. He directs the company in implementing thoughtful and practical workflows, visualizations, and experiences into our platform. Matt and his team translate user requirements into intuitively functional interfaces. Matthew’s prior background...

Wednesday July 26, 2017 14:00 - 14:25 PDT
Ground Truth (Firenze) 255 E Flamingo Rd, Las Vegas, NV 89169

14:30 PDT

Exploration of Novel Visualizations for Information Security Data
Effective visualizations for information security data are challenging. Given the streaming nature of network data and the mix of numeric and categorical types (e.g. DNS records), visualizations that are meaningful and informative are often hard to find. Even highly successful application interfaces like Kibana and Splunk will often provide only a simple set of volume-over-time histograms, pie/donut charts and line plots. Although these visualizations provide some information, they are limited in application and fidelity.

In this presentation we’ll explore several novel visualization approaches for information security data. Our non-traditional approaches will explore dynamic updates, mixed categorical/numeric representations, animations and other experimental facets. We intend to present our findings ‘warts’ and all. The presentation will include approaches that worked reasonably well and those that flopped (which is often just as informative).


Brian Wylie

Kitware Inc
Brian Wylie is a technical lead at Kitware Inc. His interests include networking, static analysis, and streaming architectures. Recent work includes modeling for SQL injection, hidden DNS and HTTP tunnels, streaming clustering and anomaly detection. Brian has spoken at ShmooCon, BSides...

Wednesday July 26, 2017 14:30 - 14:55 PDT
Ground Truth (Firenze) 255 E Flamingo Rd, Las Vegas, NV 89169

15:00 PDT

Magical Thinking... and how to thwart it.
For all the progress we’ve made – as a community, as an industry, as a discipline – describing the brittleness of our IT infrastructure and ‘the shape of the beast’ (what is this hacking stuff anyway?), we’re not seeing much in the way of obvious returns in two key areas: procurement and policy.

We know what's broken; we even mostly know how to fix it. We fight the good fight from the C-suite to Capitol Hill. Yet often we lose. Why?

Behind nearly every poor choice in procurement or policy is some species of magical thinking. Not idiocy, not ignorance, not malice, but a logical error in determining causality. These are not complicated fallacies, nor particularly difficult to spot, but they are seductive, they are omnipresent. And, unfortunately, they are often profitable. They are also critical to our understanding of *why* broken things stay broken, and why evidence-based policies are so elusive.

Attendees will explore imagined realities informing real policy and procurement decisions; they will additionally have the opportunity to learn and share battle-tested thwarting strategies.


Mara Tam

Senior Fellow, Center for Advanced Studies on Terrorism (CAST)
Mara is a Washington DC-based ICT security policy expert. Mara regularly serves as a private sector advisor to executive agencies on information security issues, focusing on the technical and strategic implications of regulatory and policy activity. Prior to her current roles, she...

Wednesday July 26, 2017 15:00 - 15:25 PDT
Ground Truth (Firenze) 255 E Flamingo Rd, Las Vegas, NV 89169

15:30 PDT

Is Data Visualization still necessary?
As researchers, we all struggle with and push the limits of available data visualization libraries. The availability of real-time network flows has exceeded the capacity of current visualization libraries and the ability of humans to grasp densely visualized information. How much data is too much? We will explore the current state of the art in visualization as we try to answer the question of how to visualize backbone-level enterprise data.


Edmond Rogers

University of Illinois
Edmond 'bigezy' Rogers, CISSP, is actively involved with industry and in many research activities at the University of Illinois Information Trust Institute (ITI)’s TCIPG and CREDC Center, including work on ICS and SCADA visualization along with Smart Grid Security. Project work on...

Grace Rogers

Front-End Designer, Kaedago
Grace Rogers is a student involved in several data analysis and visualization projects. She is currently designing the front end of CyPSA’s visualization tool for Kaedago. Additionally, Grace is working with researchers at the University of Illinois at Urbana-Champaign on a tool...

Wednesday July 26, 2017 15:30 - 16:00 PDT
Ground Truth (Firenze) 255 E Flamingo Rd, Las Vegas, NV 89169

17:00 PDT

Grappling Hooks on the Ivory Tower: This Year in Practical Academic Research
Five years before volume 1 issue 1 of Phrack, there was IEEE Security and Privacy. Where Merkle (of Merkle–Damgård; think SHA-2) showed us how to do crypto right in 1980. Where your favourite nation-state adversaries watch their secrets become public. Where Naval Postgraduate School showed off their secure kernel in 1981. Since then, professors and decidedly unprofessorial types have each, mostly separately, smashed and rebuilt security with their own separate armies of admirers, haters, and hangers-on.

We'll take you on a short trip through the parallel universe of academic infosec, and point out just the cool, practical stuff that came down from the ivory tower a few months ago. You'll see a bit of yourself reflected, how hackers shape the academic world, what academics have to say about our favourite bug-writing developers, and what is shaping TLS 1.3. We hope you'll also get inspired and do some science.


Falcon Darkstar Momot

Senior Security Consultant, Leviathan Security Group
Falcon is a senior penetration tester at Leviathan Security Group who works on everything from cryptosystem design to security program operation. He also studies LangSec as an M. Sc. student at Athabasca University, and captures flags with Neg9. His alter ego is AF7MH, licensor...

Wednesday July 26, 2017 17:00 - 17:55 PDT
Ground Truth (Firenze) 255 E Flamingo Rd, Las Vegas, NV 89169

18:00 PDT

How to make metrics and influence people
Data science is not just a set of algorithms - it’s a discipline. There are many things we need to think about when we pull data from security tools, like vulnerability scanners, analyse it and present insights. This, however, is still only the beginning. In order for our analysis to have influence, we need to leverage this approach to create metrics that can actually drive improvement in security processes and help reduce risk.

During this process, there’ll be many painful questions to answer, like: “How do I choose what to measure?”; “Why doesn’t anyone seem engaged with these metrics, even though they asked for them!?”; and “What do I do when everyone seems to disagree on where the risk is?”

This talk will demonstrate how you can use data science to give everyone from IT Ops to the CISO a shared way of looking at a risk problem that they all buy into. We’ll review metrics that a team at a global financial institution is using to make strategic decisions, and show how these relate directly to tactical tasks, enabling security and IT to prioritize effectively and measure their success.
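Two of the simplest vulnerability metrics of the kind described here, mean time to remediate and mean age of open findings, can be computed straight from scan records. The record layout and dates below are invented for illustration, not taken from the talk.

```python
from datetime import date

def remediation_metrics(vulns, as_of):
    """Return (mean time to remediate closed findings, mean age of open ones),
    both in days -- two simple numbers a CISO dashboard might track."""
    closed = [(v["closed"] - v["opened"]).days for v in vulns if v.get("closed")]
    open_age = [(as_of - v["opened"]).days for v in vulns if not v.get("closed")]
    mttr = sum(closed) / len(closed) if closed else None
    mean_age = sum(open_age) / len(open_age) if open_age else None
    return mttr, mean_age

# Invented scanner findings: in practice these come from the scan history export.
findings = [
    {"opened": date(2017, 6, 1), "closed": date(2017, 6, 11)},
    {"opened": date(2017, 6, 5), "closed": date(2017, 6, 25)},
    {"opened": date(2017, 7, 1)},  # still open
]
mttr, open_age = remediation_metrics(findings, as_of=date(2017, 7, 25))
```

Slicing the same two numbers by team or asset class is what turns a raw scan export into a metric that both IT Ops and the CISO can act on.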


Leila Powell

Security Data Scientist, Panaseer
Hi - I'm a data scientist working in security. I used to use supercomputers to study the evolution of galaxies as an astrophysicist. Now I tackle more down-to-earth challenges, helping companies use different data sets to understand and address security risk. As part of the team at...

Wednesday July 26, 2017 18:00 - 18:55 PDT
Ground Truth (Firenze) 255 E Flamingo Rd, Las Vegas, NV 89169