Sending and Receiving Kafka Messages in Java (plain Java without annotations + Spring Boot annotation approach)
Prerequisite: a working Kafka environment.
Guide to setting up ZooKeeper + Kafka: https://www.cnblogs.com/weibanggang/p/12377055.html
1. Plain Java (no annotations)
Add the kafka-clients dependency:
```xml
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.0.0</version>
</dependency>
```
Consumer code:
```java
package com.wbg.springboot_kafka;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.IntegerDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class Consumer extends Thread {
    KafkaConsumer<Integer, String> consumer;
    String topic;

    public Consumer(String topic) {
        Properties properties = new Properties();
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,
                "192.168.198.129:9092,192.168.198.130:9092,192.168.198.131:9092");
        properties.put(ConsumerConfig.CLIENT_ID_CONFIG, "consumer");
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, "consumer");
        properties.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "30000");
        // auto-commit offsets in batches, once per second
        properties.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class.getName());
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // a consumer in a new group starts from the earliest offset,
        // so it can also read messages published before it joined
        properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        consumer = new KafkaConsumer<Integer, String>(properties);
        this.topic = topic;
    }

    @Override
    public void run() {
        consumer.subscribe(Collections.singleton(this.topic));
        while (true) {
            ConsumerRecords<Integer, String> consumerRecords = consumer.poll(Duration.ofSeconds(1));
            consumerRecords.forEach(record ->
                    System.out.println(record.key() + "->" + record.value() + "->" + record.offset()));
        }
    }

    public static void main(String[] args) {
        new Consumer("test_partition").start();
    }
}
```
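The `IntegerDeserializer`/`StringDeserializer` configured above turn the raw bytes received from the broker back into keys and values (the producer's serializers do the reverse). Kafka's `IntegerSerializer` simply writes the int as 4 big-endian bytes. A stdlib-only sketch of that round-trip (the class name `IntSerDemo` is made up for illustration):

```java
import java.nio.ByteBuffer;

public class IntSerDemo {
    // Mirrors what Kafka's IntegerSerializer does: an int becomes
    // 4 big-endian bytes on the wire.
    static byte[] serialize(int key) {
        return ByteBuffer.allocate(4).putInt(key).array();
    }

    // Mirrors IntegerDeserializer: read the 4 bytes back into an int.
    static int deserialize(byte[] data) {
        return ByteBuffer.wrap(data).getInt();
    }

    public static void main(String[] args) {
        byte[] wire = serialize(42);
        System.out.println(wire.length);        // 4
        System.out.println(deserialize(wire));  // 42
    }
}
```

This is why the key and value deserializers must match what the producer used: feeding a string's bytes to `deserialize` above would yield garbage, just as a mismatched deserializer does in Kafka.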
Producer code:
```java
package com.wbg.springboot_kafka;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.IntegerSerializer;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;
import java.util.concurrent.TimeUnit;

public class Producer extends Thread {
    KafkaProducer<Integer, String> producer;
    String topic;

    public Producer(String topic) {
        Properties properties = new Properties();
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
                "192.168.198.129:9092,192.168.198.130:9092,192.168.198.131:9092");
        properties.put(ProducerConfig.CLIENT_ID_CONFIG, "producer");
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class.getName());
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        producer = new KafkaProducer<Integer, String>(properties);
        this.topic = topic;
    }

    @Override
    public void run() {
        int num = 0;
        while (num < 20) {
            try {
                String msg = "kafka msg " + num;
                // async send with a callback; on failure recordMetadata is null,
                // so check the exception before using it
                producer.send(new ProducerRecord<>(topic, 3, msg), (recordMetadata, e) -> {
                    if (e != null) {
                        e.printStackTrace();
                    } else {
                        System.out.println(recordMetadata.offset() + "->" + recordMetadata.partition());
                    }
                });
                num++;
                TimeUnit.SECONDS.sleep(2);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) {
        new Producer("test_partition").start();
    }
}
```

(The original loop never incremented `num`, so it sent "kafka msg 0" forever; `num++` fixes that.)
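The producer above sends asynchronously and learns the outcome only in the callback. `send(...)` also returns a `Future<RecordMetadata>`, so when you need to block until the broker acknowledges a record you can wait on it. A minimal sketch under the same assumptions as the examples above (same broker address and topic; class name `SyncProducerSketch` is made up):

```java
import java.util.Properties;
import java.util.concurrent.Future;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.IntegerSerializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class SyncProducerSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.198.129:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<Integer, String> producer = new KafkaProducer<>(props)) {
            // send() itself is still async; get() blocks until the broker
            // acks the record or the send fails with an exception
            Future<RecordMetadata> future =
                    producer.send(new ProducerRecord<>("test_partition", 1, "sync msg"));
            RecordMetadata meta = future.get();
            System.out.println("partition=" + meta.partition() + " offset=" + meta.offset());
        }
    }
}
```

Blocking per record trades throughput for a simple delivery guarantee; the callback style above keeps the send pipeline full.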
Start the producer.
Start the consumer; it prints each key, value, and offset as messages arrive.
2. Spring Boot annotation approach
Add the spring-kafka dependency to the pom:
```xml
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>2.2.0.RELEASE</version>
</dependency>
```
application.properties:
```properties
spring.kafka.bootstrap-servers=192.168.198.129:9092,192.168.198.130:9092,192.168.198.131:9092
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.IntegerSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.consumer.group-id=springboot-groupid
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.enable-auto-commit=true
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.IntegerDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
```
Consumer code:
```java
package com.wbg.springboot_kafka;

import java.util.Optional;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class KafkaMyConsumer {
    @KafkaListener(topics = {"test"})
    public void listener(ConsumerRecord<Integer, String> record) {
        Optional<String> msg = Optional.ofNullable(record.value());
        if (msg.isPresent()) {
            System.out.println(msg.get());
        }
    }
}
```
Producer code:
```java
package com.wbg.springboot_kafka;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class KafkaMyProducer {
    @Autowired
    private KafkaTemplate<Integer, String> kafkaTemplate;

    public void send() {
        kafkaTemplate.send("test", 1, "msgData");
    }
}
```
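The `send()` above is fire-and-forget: in spring-kafka 2.2, `KafkaTemplate.send(...)` returns a `ListenableFuture<SendResult<K, V>>`, so a producer that needs to know whether the broker accepted the record can register a callback. A sketch under the same configuration (the class name `KafkaCallbackProducer` is made up for illustration):

```java
package com.wbg.springboot_kafka;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Component;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;

@Component
public class KafkaCallbackProducer {
    @Autowired
    private KafkaTemplate<Integer, String> kafkaTemplate;

    public void sendWithCallback() {
        ListenableFuture<SendResult<Integer, String>> future =
                kafkaTemplate.send("test", 1, "msgData");
        future.addCallback(new ListenableFutureCallback<SendResult<Integer, String>>() {
            @Override
            public void onSuccess(SendResult<Integer, String> result) {
                // the broker accepted the record; metadata carries partition/offset
                System.out.println("sent to offset " + result.getRecordMetadata().offset());
            }

            @Override
            public void onFailure(Throwable ex) {
                // e.g. broker unreachable or serialization failure
                ex.printStackTrace();
            }
        });
    }
}
```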
Startup class:
```java
package com.wbg.springboot_kafka;

import java.util.concurrent.TimeUnit;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ConfigurableApplicationContext;

@SpringBootApplication
public class SpringbootKafkaApplication {
    public static void main(String[] args) throws InterruptedException {
        ConfigurableApplicationContext context = SpringApplication.run(SpringbootKafkaApplication.class, args);
        KafkaMyProducer kafkaMyProducer = context.getBean(KafkaMyProducer.class);
        for (int i = 0; i < 10; i++) {
            kafkaMyProducer.send();
            TimeUnit.SECONDS.sleep(3);
        }
    }
}
```
Original article: https://www.cnblogs.com/weibanggang/p/12416151.html