Guide to Kafka Summit London 2023: Sessions, Networking, and More

As we inch closer to the next big event in the world of data – Kafka Summit London 2023 – it’s time to get the gears turning! With five parallel tracks of intriguing sessions and a plethora of professionals from diverse industries, the event promises to be power-packed. This blog post will guide you through the different sessions, festivities, community activities, and more, to ensure you make the most of your Kafka Summit experience.

How to Choose the Ideal Sessions?

We understand how overwhelming the event’s exhaustive schedule can be for attendees. To make things easier, here are some pointers that might help streamline your thought process:

  1. Identify What Interests You – Reflect on your preferences and areas of interest related to Kafka.
  2. Sector Specific Interests? – Are there any industries or companies you’re curious about? Companies like Mercedes-Benz, FREENOW, and Michelin will be sharing their insights at the summit.
  3. Get Electrified with Lightning Talks – Keep room for the stimulating lightning sessions. If you’re one to savour crisp content packed in short time frames, these sessions are meant for you!

Need more help? Kick-start your Kafka Summit itinerary with some of the sessions that particularly caught my attention:

  • Apache Flink on Kafka: Reliable Data Pipelines Everyone Can Code – presented by Ela Demir
  • A Practical Guide to End-to-End Tracing in Event-Driven Architectures – shared by Roman Kolesnev
  • You’ve Got Mail! – led by Michael van der Haven and Chris Egerton
  • Exactly-Once, Again: Adding EOS Support for Kafka Connect Source Connectors

Please refer to our full agenda for more details.

Beyond the Sessions

The Summit experience isn’t limited to attending sessions; here are a few bonus activities to look forward to:

  1. Pac-Man Rule – Make networking smoother by following our ‘Pac-Man’ rule, which helps ensure everyone attending the event feels included.
  2. Unofficial Kafka Summit 5K Fun Run – Break a sweat in our unofficial 5K run! For more details, watch out for the ‘Fun Run’ section in the agenda.
  3. Community Meetup Hub and Birds-of-a-Feather Luncheons – Share ideas, experiences and form connections at the Community Meetup Hub.
  4. Kafka Fundamentals Course – Want to learn more about Kafka? Don’t forget to sign up for the course on our registration page.
  5. Kafka Summit Party – Relax and unwind with your fellow participants after a day full of learning.

Don’t forget to share moments from your Kafka Summit experience on social media with the hashtags #KafkaSummit and #StreamingSelfie.

Tickled with excitement? Then why wait? Register for Kafka Summit London today.

Conclusion

Anticipation is running high as we gear up for the Kafka Summit London 2023. Whether you’re a seasoned Kafka expert or a curious newcomer, this event is your chance to delve deeper into the world of Kafka, form valuable connections, and most importantly, have fun! See you there!

Tags: KafkaSummit, Apache Kafka, Data Streaming, Event


Maximizing Real-Time Streaming with Apache Kafka Consumer Groups

Apache Kafka is an open-source distributed event streaming platform that gives teams power and precision in handling real-time data. Understanding the ins and outs of Kafka and its concepts, such as consumer groups, helps organizations harness the full potential of their real-time streaming applications and services.

Understanding Kafka Consumers and Consumer Groups

Kafka consumers are typically organized into consumer groups made up of multiple consumers. This design allows Kafka to process messages in parallel, providing notable processing speed and efficiency.

That said, a single consumer can read all messages from a topic on its own, and conversely, several consumer groups can read from the same Kafka topic. The right setup depends on your specific requirements and use case.
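
As a minimal sketch of how this looks in code (assuming a broker at localhost:9092, an illustrative topic named orders, and a group ID of order-processors, none of which come from the text above), every consumer that shares the same group.id joins the same consumer group:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class GroupedConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative broker address
        // All consumers sharing this group.id form one consumer group.
        props.put("group.id", "order-processors");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Subscribing (rather than assigning partitions manually) lets the
            // group coordinator spread partitions across the group's members.
            consumer.subscribe(List.of("orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```

Starting a second copy of this program with the same group.id adds a second member to the group, and Kafka splits the topic’s partitions between the two.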

Distributing Messages to Kafka Consumer Groups

Kafka distributes messages in an organized way: each topic is divided into partitions, and it is these partitions that are assigned to the consumers in a group.

If a consumer group contains a single consumer, that consumer receives messages from all partitions of the topic:

[Figure: Single consumer]

In the case of a consumer group with two consumers, each will receive messages from half of the topic partitions:

[Figure: Two consumers]

Consumer groups keep balancing their consumers across partitions until there is a 1:1 ratio of consumers to partitions:

[Figure: Balancing consumers]

However, if there are more consumers than partitions, the surplus consumers will not receive any messages:

[Figure: Surplus consumers]
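
To see why the consumer-to-partition ratio matters, here is a sketch that creates a topic with six partitions using the Kafka admin client; the topic name, partition count, and replication factor are all illustrative rather than taken from the text. Running six copies of the consumer above with the same group.id would give each one a partition, while a seventh copy would sit idle:

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreatePartitionedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative broker address

        try (AdminClient admin = AdminClient.create(props)) {
            // Six partitions means up to six consumers in one group can read
            // in parallel; any additional consumer would receive nothing.
            NewTopic topic = new NewTopic("orders", 6, (short) 1);
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}
```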

Exploring Consumer Group IDs, Offsets, and Commits

Each consumer group has a unique identifier, known as a group ID. Consumers configured with different group IDs belong to different groups. And rather than explicitly acknowledging each message it has read, a Kafka consumer tracks its progress with an offset: the position of a message within its partition.

[Figure: Offsets]

You can store these offsets yourself, or let Kafka manage them for you. When Kafka manages them, the consumer commits its offsets to an internal topic named __consumer_offsets.
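
If you want control over when offsets are committed, you can disable auto-commit and commit manually after processing. Here is a hedged sketch, reusing the illustrative orders topic and order-processors group from the earlier example:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "order-processors");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        // Take control of offset commits instead of the default periodic auto-commit.
        props.put("enable.auto.commit", "false");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Placeholder for your own processing logic.
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
                // Record the group's progress in __consumer_offsets only after
                // the batch has been fully processed.
                consumer.commitSync();
            }
        }
    }
}
```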

Consumer Dynamics in a Kafka Consumer Group

When a new consumer joins a Kafka consumer group, it looks up the latest committed offsets and starts consuming messages from partitions that were previously assigned to another consumer. The same happens if a consumer leaves the group or crashes: one of the remaining consumers takes over and consumes from the partitions previously assigned to the absent consumer.

[Figure: Overview]

This process is called “rebalancing”. It can be triggered under a variety of circumstances and keeps the group running efficiently as its membership changes.
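
If your application needs to react when a rebalance moves partitions around (for example, to commit offsets before they are revoked), you can pass a ConsumerRebalanceListener when subscribing. A minimal sketch, again using the illustrative topic and group names from the examples above:

```java
import java.time.Duration;
import java.util.Collection;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class RebalanceAwareConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "order-processors");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"), new ConsumerRebalanceListener() {
                @Override
                public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                    // Called before partitions are taken away, e.g. when a new
                    // consumer joins the group; a good place to commit offsets.
                    System.out.println("Revoked: " + partitions);
                }

                @Override
                public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                    // Called after the group coordinator hands this consumer
                    // its new share of partitions.
                    System.out.println("Assigned: " + partitions);
                }
            });
            while (true) {
                consumer.poll(Duration.ofMillis(500));
            }
        }
    }
}
```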

In Conclusion

Understanding how Kafka streams data, down to internal mechanisms such as consumer groups, is crucial for any organization looking to leverage its power. By building on Apache Kafka’s sophisticated design, teams can ensure maximum efficiency in the real-time streaming applications and services that power their operations.

Tags: #ApacheKafka #ConsumerGroups #BigData #DataStreaming
