Transactional Streaming: If You Can Compute It, You Can Probably Stream It
This presentation will describe what is possible when integrated systems apply a transactional approach to event processing.
In the race to pair streaming systems with stateful systems, the winners will be stateful systems that process streams natively. These systems remove the burden on application developers to be distributed systems experts, and enable new applications to be both powerful and robust.
The operational systems of the future will not look like the past, with authoritative databases, caching layers and ORMs thrown in for fun :)
The distinction between traditional operational systems and event/stream processing has begun to blur. This event-stream focus loosens up processing constraints and makes scaling easier to manage. Nevertheless, these systems still need state, and they still need to take action.
The tools to build these kinds of operational systems are evolving, but immature. Systems that focus just on streaming neglect state, and vice versa. Cobbled-together hybrid systems offer flexibility, but they are complex to develop and deploy, may mask bugs, and display surprising behavior when components fail. The superficially 'simple' task of connecting systems together requires both distributed systems expertise and tremendous effort to hammer down unforeseen production issues.
This talk will explore what is possible when systems integrate event processing with state management in a consistent, transactional way. One event = one ACID transaction.
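To make the idea concrete, here is a minimal sketch of what "one event = one ACID transaction" can look like, written against VoltDB's Java stored-procedure API. The schema, procedure name, and business logic are hypothetical illustrations, not material from the talk:

```java
import org.voltdb.SQLStmt;
import org.voltdb.VoltProcedure;
import org.voltdb.VoltTable;

// Hypothetical: one incoming debit event handled as one ACID transaction.
// The procedure reads current state, makes a decision, and records the
// outcome atomically; an abort rolls back everything.
public class ProcessDebit extends VoltProcedure {

    public final SQLStmt readBalance = new SQLStmt(
        "SELECT amount FROM balances WHERE account_id = ?;");
    public final SQLStmt debit = new SQLStmt(
        "UPDATE balances SET amount = amount - ? WHERE account_id = ?;");
    public final SQLStmt log = new SQLStmt(
        "INSERT INTO event_log (account_id, amount, ts) VALUES (?, ?, ?);");

    public long run(long accountId, long amount, long ts) throws VoltAbortException {
        voltQueueSQL(readBalance, accountId);
        VoltTable balance = voltExecuteSQL()[0];
        // Assuming a BIGINT amount column; asScalarLong() reads a 1x1 result.
        if (balance.getRowCount() == 0 || balance.asScalarLong() < amount) {
            // Aborting rolls back the whole transaction: the event has no effect.
            throw new VoltAbortException("insufficient funds");
        }
        voltQueueSQL(debit, amount, accountId);
        voltQueueSQL(log, accountId, amount, ts);
        voltExecuteSQL(true); // both writes commit together, or neither does
        return amount;
    }
}
```

Note that the decision (accept or reject the debit) happens inside the same transaction as the state it depends on, so there is no window for a stale read between the check and the write.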
How do integration and atomic processing simplify failure management? How does this simplify building applications?

How can users leverage stronger consistency to perform more complex math and calculations within event processing? How can we move from struggling to count at all to dynamic counting and aggregation in real time?
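One common pattern for transactional counting is a read-modify-write aggregation performed in the same ACID transaction as the event itself. The sketch below shows per-user, per-minute counts, again using VoltDB's stored-procedure API with a hypothetical schema:

```java
import org.voltdb.SQLStmt;
import org.voltdb.VoltProcedure;
import org.voltdb.VoltTable;

// Hypothetical: maintain per-user counts in one-minute buckets, updated
// atomically as each event arrives, so the aggregate is always consistent.
public class CountByMinute extends VoltProcedure {

    public final SQLStmt bump = new SQLStmt(
        "UPDATE counts_by_minute SET total = total + 1 WHERE user_id = ? AND minute = ?;");
    public final SQLStmt create = new SQLStmt(
        "INSERT INTO counts_by_minute (user_id, minute, total) VALUES (?, ?, 1);");

    public long run(long userId, long ts) throws VoltAbortException {
        long minute = ts / 60_000; // bucket millisecond timestamps into minutes
        voltQueueSQL(bump, userId, minute);
        VoltTable[] results = voltExecuteSQL();
        // A DML statement returns its modified row count as a 1x1 table;
        // if no row existed for this bucket yet, create it.
        if (results[0].asScalarLong() == 0) {
            voltQueueSQL(create, userId, minute);
            voltExecuteSQL(true);
        }
        return minute;
    }
}
```

Because the update-then-insert sequence runs inside a single transaction, concurrent events for the same bucket cannot produce duplicate rows or lost increments.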
Many streaming systems focus on at-least-once or at-most-once delivery. Those that offer stronger guarantees are often very limited in how they interact with state. Can stronger consistency help achieve exactly-once semantics?
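One answer is to make processing idempotent by deduplicating inside the transaction itself: record each event ID in the same transaction that applies the event's effects, and a redelivered event becomes a no-op. A hedged sketch of that pattern, with hypothetical table names:

```java
import org.voltdb.SQLStmt;
import org.voltdb.VoltProcedure;
import org.voltdb.VoltTable;

// Hypothetical: exactly-once semantics via in-transaction deduplication.
// Marking the event as processed and applying its effect commit atomically,
// so a retry after failure either sees both or neither.
public class ApplyOnce extends VoltProcedure {

    public final SQLStmt seen = new SQLStmt(
        "SELECT event_id FROM processed_events WHERE event_id = ?;");
    public final SQLStmt mark = new SQLStmt(
        "INSERT INTO processed_events (event_id) VALUES (?);");
    public final SQLStmt apply = new SQLStmt(
        "UPDATE balances SET amount = amount + ? WHERE account_id = ?;");

    public long run(long eventId, long accountId, long delta) throws VoltAbortException {
        voltQueueSQL(seen, eventId);
        VoltTable dup = voltExecuteSQL()[0];
        if (dup.getRowCount() > 0) {
            return 0; // duplicate delivery: state untouched, caller still gets success
        }
        voltQueueSQL(mark, eventId);
        voltQueueSQL(apply, delta, accountId);
        voltExecuteSQL(true);
        return 1; // applied exactly once
    }
}
```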
Finally, the latency reduction from integration can mean the difference between decisions that control how an event is processed and reactive decisions that only affect future events. Complementary to latency, integration can also increase throughput, which can mean the difference between managing a handful of servers and managing a fleet.
Come and enjoy this action-packed presentation and find out how to solve all these challenges!
John Hugg has spent his entire career working with databases and information management at a number of startups including Vertica Systems and now VoltDB.
As the founding engineer at VoltDB, he was involved in all of the key early design decisions and worked collaboratively with the new VoltDB team as well as academic researchers exploring similar challenges.
In addition to his engineering role, John has become a primary evangelist for the technology, speaking at conferences worldwide and authoring blog posts on VoltDB, as well as on the state of OLTP and stream processing broadly.