7 DEC 2018 Tempered @ Center for New Music (San Francisco, CA)
5 JUN 2018 The Poetics of Swarms @ La Casa Naranja (San Francisco, CA)
11 OCT 2017 Step by Step @ Metis (San Francisco, CA) - video
21 SEP 2016 Forms (w/ Carolyn Strauss) @ Conference on Complex Systems (Amsterdam, NL)
9 AUG 2016 Form and Movement in Complex Systems @ PyLadies (San Francisco, CA) - video
4 JUN 2016 On People, Cubes and Snacks @ AlterConf (San Francisco, CA) - video
The Poetics of Swarms
- A brief history of swarm intelligence
- PSO papers
- Opportunities and bottlenecks for applying these heuristics
Swarm intelligence is the collective behavior of decentralized, self-organized systems: the whole being other than the sum of its parts. Many human traditions have described swarm intelligence, and while my research is increasingly rooted in finding real-world applications of swarm intelligence algorithms from the evolutionary computing tradition, my heritage as an empath and artist makes it impossible for me not to be porous to this broader heritage. This talk is rooted in my ongoing work on SwarmLab, an open source swarm intelligence algorithm library and test kitchen, and in my broader commitment to reconnecting with traditions of collective ideation and collaboration. Together we will explore how thinking as and with the collective opens up dynamic modes of problem solving.
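To make the PSO heuristic mentioned above concrete, here is a minimal global-best particle swarm in Python. This is an illustrative toy, not SwarmLab's API; every name and parameter value here is my own:

```python
import random

def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimize f over a box using a basic global-best particle swarm."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best-seen position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best-seen position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + pull toward personal best + pull toward swarm best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

random.seed(0)
# minimize the 2-D sphere function; the swarm should converge near the origin
best, best_val = pso(lambda x: sum(v * v for v in x), dim=2)
```

Each particle balances its own memory against the swarm's, which is exactly the decentralized, self-organized dynamic the talk explores.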
STEP BY STEP: When and How to Use Stochastic Optimization
- Intro to stochastic optimization
- In-depth look at Stochastic Gradient Descent, Particle Swarm Optimization and Simulated Annealing
- Implementation and optimization walkthrough in Python
- Industry applications
You’ve got your model up and running, and now you’d like to tune your hyperparameters. If you’ve gotten to this point in the process, it means you’ve successfully cleaned your data, engineered your features, coded up your model, identified your objective function, and are now looking to optimize. You could cancel your evening plans to manually explore all hyperparameter combinations, or programmatically iterate through them with something like grid search, but you want to try something else. Something that will produce impressive results in a fraction of the time. Enter stochastic optimization.
This presentation will build on what you already know about parameter optimization to dive deeper into the world of stochastic optimization. In particular, we’ll explore stochastic gradient descent, simulated annealing, and particle swarm optimization - cultivating intuition on how these algorithms work, when they can be applied, what their underlying topologies look like, and how you can get the best performance out of them. If possible, bring models you’re working on, so you can relate the content directly to what you’re building. I’ll be available afterwards (and by e-mail) to ensure you have what you need to implement the optimizers we discuss. Code examples will be presented in Python, but I’ll also provide libraries and resources in R.
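As a small taste of the material, here is a minimal simulated annealing sketch in Python. It is illustrative only - function names, the toy objective, and all parameter values are my own, not drawn from the talk:

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, iters=2000):
    """Minimize a 1-D function f: propose random moves, always accept
    downhill ones, and accept uphill ones with probability exp(-delta/T),
    where the temperature T decays each iteration."""
    x, fx = x0, f(x0)
    best, best_fx = x, fx
    t = t0
    for _ in range(iters):
        cand = x + random.uniform(-step, step)   # random local proposal
        f_cand = f(cand)
        delta = f_cand - fx
        if delta < 0 or random.random() < math.exp(-delta / t):
            x, fx = cand, f_cand                 # accept the move
            if fx < best_fx:
                best, best_fx = x, fx
        t *= cooling                             # cool the system
    return best, best_fx

random.seed(1)
# toy objective with minimum at x = 2, started far away
best, best_fx = simulated_annealing(lambda x: (x - 2.0) ** 2, x0=8.0)
```

The occasional uphill acceptance is what lets the search escape local minima early on, while the cooling schedule makes it settle as the run progresses.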