

I installed the VLC player on my PC. Playback works fine in IE, but not in Firefox. [The original post attached a screenshot and the embed code here.] What is going on?
Source: OSChina (开源中国)
Published: 2016-11-30 14:46:00
Kafka data is backing up. How can I consume the entire backlog as quickly as possible?
Source: OSChina (开源中国)
Published: 2020-04-15 11:20:00
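No answer survives in this scrape. A common approach (not from the thread itself) is to raise consumer parallelism up to the topic's partition count and hand polled batches to a worker pool. Below is a minimal stdlib sketch of that hand-off pattern only; the BlockingQueue stands in for records returned by `consumer.poll()`, since no broker or Kafka client library is assumed here:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: drain a backlog by handing "records" to a fixed worker pool.
// In a real consumer the queue would be fed from consumer.poll() in a loop.
public class BacklogDrain {
    public static int drain(int backlogSize, int workers) throws InterruptedException {
        BlockingQueue<Integer> backlog = new LinkedBlockingQueue<>();
        for (int i = 0; i < backlogSize; i++) backlog.add(i);

        AtomicInteger processed = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int w = 0; w < workers; w++) {
            pool.submit(() -> {
                Integer record;
                while ((record = backlog.poll()) != null) {
                    processed.incrementAndGet(); // stand-in for real record handling
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return processed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("processed " + drain(10_000, 8) + " records");
    }
}
```

Note that in real Kafka the partition count caps useful consumer parallelism: adding consumers beyond the number of partitions does nothing, so draining a large backlog usually also means more partitions and larger fetch batches (e.g. raising `max.poll.records`).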
Kafka messages are piling up and consumer throughput cannot keep up. How should this be handled?
Source: OSChina (开源中国)
Published: 2020-07-14 19:17:00
The multi-consumer problem with spring-kafka has troubled me for a long time. Today the project hit it again:

Attempt to heart beat failed since the group is rebalancing, try to re-join group.

and messages stopped being received. I have searched a lot of material and read many related articles but found no solution; maybe I am searching the wrong way. So I am asking here and hope someone can help.

The situation: the Kafka version is 9.0.1 (likely 0.9.0.1). The project is a distributed microservice architecture, and sometimes a message has to reach every instance of the same service, so I need Kafka's broadcast mode. Since different groups on the same topic each receive every message, I used the instance IP as the group ID in the consumer configuration:

private Map consumerProps() {
    return new CustomHashMap()
        .put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, servers)
        .put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true)
        .put(ConsumerConfig.GROUP_ID_CONFIG, "receiveMessage" + IpUtil.getLocalhostAddress().replace(".", ""))
        .put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "100")
        .put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "15000")
        .put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class)
        .put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
}

Messages are received with the @KafkaListener annotation; receiveKafkaListenerContainerFactory is a custom bean:

@KafkaListener(containerFactory = "receiveKafkaListenerContainerFactory", topics = KafkaTopicName.DEVICE_MESSAGE_TOPIC)
public void onMessageListener(MessageTemplate message) {
    log.info("===> receive [{}]", message.getMessage());
    parseMessageAdapter.adapter(message.getType(), message);
}

After the service starts, only the first message comes through, and as soon as it is received the error above appears:

INFO [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-kafka-consumer-1] m.h.b.p.listener.receiver.Receiver.onMessageListener:24 - ===> receive [test1234567890test1234567890]
INFO [org.springframework.kafka.KafkaListenerEndpointContainer#0-2-kafka-consumer-1] o.a.k.c.c.i.AbstractCoordinator.handle:623 - Attempt to heart beat failed since the group is rebalancing, try to re-join group.
INFO [org.springframework.kafka.KafkaListenerEndpointContainer#0-2-kafka-consumer-1] o.s.k.l.KafkaMessageListenerContainer.onPartitionsRevoked:244 - partitions revoked:[]
INFO [org.springframework.kafka.KafkaListenerEndpointContainer#0-1-kafka-consumer-1] o.a.k.c.c.i.AbstractCoordinator.handle:623 - Attempt to heart beat failed since the group is rebalancing, try to re-join group.
INFO [org.springframework.kafka.KafkaListenerEndpointContainer#0-1-kafka-consumer-1] o.s.k.l.KafkaMessageListenerContainer.onPartitionsRevoked:244 - partitions revoked:[]
INFO [org.springframework.kafka.KafkaListenerEndpointContainer#0-2-kafka-consumer-1] o.s.k.l.KafkaMessageListenerContainer.onPartitionsAssigned:249 - partitions assigned:[]
INFO [org.springframework.kafka.KafkaListenerEndpointContainer#0-1-kafka-consumer-1] o.s.k.l.KafkaMessageListenerContainer.onPartitionsAssigned:249 - partitions assigned:[deviceMessageTopic-0]
INFO [org.springframework.kafka.KafkaListenerEndpointContainer#0-1-kafka-consumer-1] m.h.b.p.listener.receiver.Receiver.onMessageListener:24 - ===> receive [test1234567890test1234567890]
INFO [org.springframework.kafka.KafkaListenerEndpointContainer#0-2-kafka-consumer-1] o.a.k.c.c.i.AbstractCoordinator.handle:623 - Attempt to heart beat failed since the group is rebalancing, try to re-join group.
INFO [org.springframework.kafka.KafkaListenerEndpointContainer#0-2-kafka-consumer-1] o.s.k.l.KafkaMessageListenerContainer.onPartitionsRevoked:244 - partitions revoked:[]
INFO [org.springframework.kafka.KafkaListenerEndpointContainer#0-2-kafka-consumer-1] o.s.k.l.KafkaMessageListenerContainer.onPartitionsAssigned:249 - partitions assigned:[deviceMessageTopic-0]
INFO [org.springframework.kafka.KafkaListenerEndpointContainer#0-2-kafka-consumer-1] m.h.b.p.listener.receiver.Receiver.onMessageListener:24 - ===> receive [test1234567890test1234567890]

From the error message it is easy to see what happens: while sending a heartbeat, the consumer finds that partitions are being reassigned, so the heartbeat fails and it tries to re-join the group. The known triggers for a rebalance are:

1. a new consumer joins the group
2. an old consumer dies
3. the coordinator dies and the cluster elects a new one
4. new partitions are added to the topic
5. a consumer calls unsubscribe() and cancels its topic subscription

Since this happens while a single service is starting, triggers 1 and 2 seem possible, but after checking the configuration carefully I found nothing wrong. I also looked at the topic in Kafka: it has a single partition and nothing unusual. If anyone can see where the mistake is, please leave your solution here; if you want more details, feel free to ask and I will reply promptly. Many thanks!
Source: OSChina (开源中国)
Published: 2017-08-16 15:21:00
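No answer survives in this scrape. A stdlib-only sketch of the configuration pattern the poster describes (a unique group.id per instance to get broadcast semantics) is below; the class and method names are illustrative, not from the thread, and plain java.util.Properties with literal config keys is used so no Kafka client classes are needed to build the map:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Properties;

// Broadcast pattern: every service instance computes its own group.id,
// so each consumer group independently receives every message on the topic.
public class BroadcastConsumerConfig {
    public static Properties consumerProps(String servers, String hostSuffix) {
        Properties props = new Properties();
        props.put("bootstrap.servers", servers);
        props.put("enable.auto.commit", "true");
        // Unique group per instance -> broadcast semantics.
        props.put("group.id", "receiveMessage" + hostSuffix);
        props.put("auto.commit.interval.ms", "100");
        props.put("session.timeout.ms", "15000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    // Derive the suffix from the local IP, as the poster's IpUtil did.
    public static String localSuffix() {
        try {
            return InetAddress.getLocalHost().getHostAddress().replace(".", "");
        } catch (UnknownHostException e) {
            return "unknown";
        }
    }

    public static void main(String[] args) {
        System.out.println(consumerProps("broker:9092", localSuffix()).getProperty("group.id"));
    }
}
```

One possible reading of the log, offered only as a guess: three listener containers (#0-0, #0-1, #0-2) are starting in the same group (they all share the IP-derived group.id), so each container joining triggers a fresh rebalance, which is exactly when "Attempt to heart beat failed since the group is rebalancing" is logged.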
@Gaischen Hello, I would like to ask you something. My company runs a joint project with China Mobile called IDC, and I now need to fetch messages from their Kafka system, i.e. act as a consumer client. They also run ZooKeeper. To connect to their system and pull data, is it enough to write a consumer demo like the one in your blog post to read the data from the broker? I do not need to install or configure Kafka myself, right? I am writing the demo in MyEclipse 8.5 on Windows. The code in the attached image is the sample code their side provided, and it looks somewhat different from the Kafka examples; I do not know how to adapt it. I would be very grateful for some guidance.
Source: OSChina (开源中国)
Published: 2015-12-01 18:29:00
1. ZooKeeper and Kafka are installed on a remote server; a producer and consumer started on the server itself communicate normally.
2. A Java Kafka client, however, can never connect to the remote Kafka server:

2018-08-15 13:14:57,471 (kafka-producer-network-thread | producer-1) [DEBUG - org.apache.kafka.common.network.Selector.poll(Selector.java:307)] Connection with /<server-ip> disconnected
java.net.ConnectException: Connection refused: no further information
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:54)
    at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:72)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:274)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:256)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128)
    at java.lang.Thread.run(Thread.java:748)

3. After adding host.name=<server-ip> to server.properties, Kafka fails to start:

[2018-08-15 14:01:14,781] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: Socket server failed to bind to <server-ip>:9092: Cannot assign requested address.
    at kafka.network.Acceptor.openServerSocket(SocketServer.scala:442)
    at kafka.network.Acceptor.<init>(SocketServer.scala:332)
    at kafka.network.SocketServer.$anonfun$createAcceptorAndProcessors$1(SocketServer.scala:149)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:52)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at kafka.network.SocketServer.createAcceptorAndProcessors(SocketServer.scala:145)
    at kafka.network.SocketServer.startup(SocketServer.scala:94)
    at kafka.server.KafkaServer.startup(KafkaServer.scala:250)
    at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
    at kafka.Kafka$.main(Kafka.scala:75)
    at kafka.Kafka.main(Kafka.scala)
Caused by: java.net.BindException: Cannot assign requested address
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
    at kafka.network.Acceptor.openServerSocket(SocketServer.scala:438)
    ... 11 more

Nothing else in the configuration has been changed. Any pointers would be much appreciated.
Source: OSChina (开源中国)
Published: 2018-08-15 14:13:00
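The thread records no answer, but "Cannot assign requested address" typically means the broker is told to bind to an address that does not exist on any local interface, which is common on cloud hosts where the public IP is NAT'd. A commonly suggested server.properties fragment is sketched below; note that listeners/advertised.listeners superseded host.name in Kafka 0.10+, and the IP here is a documentation placeholder, not from the thread:

```properties
# Bind on all local interfaces (0.0.0.0 always exists locally)
listeners=PLAINTEXT://0.0.0.0:9092
# Address handed to clients in metadata responses; must be reachable
# from the client machine (placeholder IP)
advertised.listeners=PLAINTEXT://203.0.113.10:9092
```

This also explains step 2: without an advertised address, remote clients are given back an address they cannot reach, so the connection is refused even though the broker is up.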
kafka-console-consumer --topic kafar.test --bootstrap-server master:9092 /etc/kafka/conf/producer.properties
Source: OSChina (开源中国)
Published: 2020-03-09 09:37:00
The program runs and connects to the server, but the consumer listener never fires and I do not know why. The basic connection settings in my Kafka consumer configuration should be fine, since the connection succeeds. Below is part of the log output after startup (it loops forever, so this is only an excerpt). What do the numbers in names like (Tsouche-gps-to-zgj-13, Tsouche-gps-to-zgj-9, Tsouche-gps-to-zgj-5, ...) mean? The prefix is my username. And does the final line (Received: 0 records) mean that no data was fetched from the server, or that data was fetched but the listener was not triggered?

DEBUG messageListenerContainer-0-C-1 [org.apache.kafka.clients.consumer.internals.Fetcher] - Sending READ_UNCOMMITTED fetch for partitions [Tsouche-gps-to-zgj-13, Tsouche-gps-to-zgj-9, Tsouche-gps-to-zgj-5, Tsouche-gps-to-zgj-25, Tsouche-gps-to-zgj-1, Tsouche-gps-to-zgj-29, Tsouche-gps-to-zgj-17, Tsouche-gps-to-zgj-21] to broker gk2.kafka.souche.com:11092 (id: 102 rack: null)
2018-11-30 17:40:18,634 DEBUG kafka-coordinator-heartbeat-thread | CID-T-souche-gps-zgj [org.apache.kafka.clients.consumer.internals.Fetcher] - Fetch READ_UNCOMMITTED at offset 0 for partition Tsouche-gps-to-zgj-26 returned fetch data (error=NONE, highWaterMark=0, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0)
2018-11-30 17:40:18,636 DEBUG messageListenerContainer-0-C-1 [org.apache.kafka.clients.consumer.internals.Fetcher] - Added READ_UNCOMMITTED fetch request for partition Tsouche-gps-to-zgj-26 at offset 0 to node gk3.kafka.souche.com:11092 (id: 103 rack: null)
[the same Fetcher "returned fetch data" and "Added READ_UNCOMMITTED fetch request" lines repeat for the other partitions across brokers gk2/gk3/gk4; every fetch returns recordsSizeInBytes=0]
2018-11-30 17:40:18,717 DEBUG messageListenerContainer-0-C-1 [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] - Sending asynchronous auto-commit of offsets {Tsouche-gps-to-zgj-24=OffsetAndMetadata{offset=0, metadata=''}, Tsouche-gps-to-zgj-0=OffsetAndMetadata{offset=57044, metadata=''}, ...} for group CID-T-souche-gps-zgj
2018-11-30 17:40:18,727 DEBUG messageListenerContainer-0-C-1 [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] - Group CID-T-souche-gps-zgj committed offset 57044 for partition Tsouche-gps-to-zgj-0
[per-partition "Group CID-T-souche-gps-zgj committed offset 0" lines repeat for partitions 1-29, followed by a matching "Completed auto-commit of offsets ..." entry]
2018-11-30 17:40:18,772 DEBUG messageListenerContainer-0-C-1 [org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer] - Received: 0 records

Source: OSChina (开源中国)
Published: 2018-11-30 17:52:00
How do I install ZooKeeper, Flink and Kafka with Docker on Windows 10? Could anyone point me to a tutorial?
Source: OSChina (开源中国)
Published: 2019-12-17 09:15:05
Kafka (version 0.10) is re-consuming data that was already consumed, and the data is from three days ago; nothing unusual shows up in the ZooKeeper logs. Three machines consume in a distributed fashion. The consumer configuration:

props.put("auto.commit.enable", "true");
props.put("auto.commit.interval.ms", "1000");
props.put("max.poll.records", "1");
props.put("auto.offset.reset", "earliest");

This is urgent; please advise!
Source: OSChina (开源中国)
Published: 2017-02-13 17:23:00
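The thread records no answer. One frequently cited cause, offered here only as a guess: with auto.offset.reset=earliest, a group whose committed offsets have expired restarts from the earliest retained record, and on 0.10-era brokers committed group offsets expire after offsets.retention.minutes, whose default (1440 minutes, i.e. one day) is shorter than the three-day window described. A broker-side fragment with an illustrative value:

```properties
# server.properties (broker side): how long committed group offsets are kept.
# The 0.10.x default is 1440 minutes (1 day); raising it, e.g. to 7 days,
# avoids falling back to auto.offset.reset after an idle period.
offsets.retention.minutes=10080
```

Separately, max.poll.records=1 makes consumption very slow, which widens any window in which offsets can lapse.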
I have hit a partition expansion problem on a Kafka cluster. Details: the cluster has 5 machines and the topic currently has 8 partitions. Without adding machines, I want to raise the partition count to 10. Using Kafka's own scripts, generating the reassignment plan, executing it, and verifying it all succeed. But in Kafka Tools the data volume on the newly added partitions differs hugely from the old ones, consumption is just as uneven, and throughput has dropped badly. Does anyone know how to fix this? Note: partitions 8 and 9 are the new ones; 0-7 are the old partitions.
Source: OSChina (开源中国)
Published: 2019-06-22 15:12:00
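No answer survives in this scrape. Part of the observed imbalance is inherent: Kafka never moves existing records to new partitions, and for keyed messages the default partitioner picks hash(key) mod partitionCount, so the same key can land on a different partition once the count changes. The stdlib sketch below only illustrates the modulo behaviour; the real client uses murmur2, not String.hashCode(), and the key names are made up:

```java
// Illustration: when the partition count changes from 8 to 10, many keys
// map to a different partition, while already-written data stays put.
public class PartitionRemap {
    // Simplified stand-in for Kafka's default partitioner (real one: murmur2).
    public static int partitionFor(String key, int partitions) {
        return Math.floorMod(key.hashCode(), partitions);
    }

    // Count how many of `keys` synthetic keys change partition after resizing.
    public static long movedKeys(int keys, int before, int after) {
        long moved = 0;
        for (int i = 0; i < keys; i++) {
            String key = "device-" + i;
            if (partitionFor(key, before) != partitionFor(key, after)) moved++;
        }
        return moved;
    }

    public static void main(String[] args) {
        System.out.println(movedKeys(10_000, 8, 10) + " of 10000 keys map to a new partition after 8 -> 10");
    }
}
```

So right after expansion the new partitions hold only freshly produced data, and they only even out over time (or after the topic's retention has cycled the old data out), which matches the skew the poster sees in Kafka Tools.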
The project needs to push messages to many tenants, but surely we cannot create one topic per tenant? If everything goes into a single topic, a tenant with little traffic can be starved by a tenant with heavy traffic; for example, a client might pull ten times and get the small tenant's messages only once. How can tenants be isolated?
Source: OSChina (开源中国)
Published: 2019-12-12 18:53:00
I am using HttpClient to mimic a request to a website. The page content I get back is all iframes, yet the rendered page clearly shows the content. How do I fetch what is inside the iframes?
Source: OSChina (开源中国)
Published: 2013-11-06 16:43:00
I am using jsoup to read a URL and parse the content, but the document I get back is incomplete.
1. Setting maxBodySize(0) had no effect at first; with the help of http://www.wityx.com/post/288_1_1.html I eventually got the full content.
2. While extracting values I noticed that one field was present when debugging but missing in a normal run. Following a hint from https://blog.csdn.net/weixin_34130389/article/details/85887340 I tried sleeping the thread for 1s before each jsoup connection; that did not help, nor did 2s. After much trial and error, sleeping 5s after jsoup has fetched the content works. I do not know why.

// Note: in the original code each Jsoup.connect(url4) call created a brand-new
// Connection, so the timeout/method/maxBodySize/followRedirects settings were
// discarded before execute(); chaining them on a single Connection applies them all:
Connection conn = Jsoup.connect(url4)
    .timeout(8000)
    .method(Connection.Method.GET)
    .maxBodySize(0)
    .followRedirects(false);
Connection.Response resp = conn.execute();
try {
    Thread.sleep(5000); // parse 5s after jsoup has fetched the content
} catch (InterruptedException e) {
    e.printStackTrace();
}
System.out.println("URL4>>" + url4);
Document document2 = resp.parse();

Source: OSChina (开源中国)
Published: 2019-09-09 20:26:00
public void test() throws Exception {
    String str = "wefwef"; // the original test string presumably contained an <img> tag that this page scrape stripped
    Whitelist user_content_filter = Whitelist.basicWithImages();
    System.out.println(Jsoup.clean(str, user_content_filter));
}

A very simple case, but here is the problem: the img tag is allowed and is not removed, yet its src attribute gets cleared. Does anyone know why?
Source: OSChina (开源中国)
Published: 2011-05-05 11:50:00
HDC調(diào)試需求開發(fā)(15萬預(yù)算),能者速來!>>> 之前看過紅薯的一個帖子 這么寫是否正確 Document doc = Jsoup.connect(homepage).userAgent("Mozilla/5.0 (Windows NT 6.1; rv:5.0)").cookie("auth", "token").timeout(1000).get(); 請大家不吝賜教 謝謝 錯誤代碼如下 時而connet time out 時而read time out Exception in thread "main" java.net.SocketTimeoutException: Read timed out at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:129) at java.io.BufferedInputStream.fill(BufferedInputStream.java:218) at java.io.BufferedInputStream.read1(BufferedInputStream.java:258) at java.io.BufferedInputStream.read(BufferedInputStream.java:317) at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:687) at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:632) at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1195) at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:379) at org.jsoup.helper.HttpConnection$Response.execute(HttpConnection.java:354) at org.jsoup.helper.HttpConnection$Response.execute(HttpConnection.java:337) at org.jsoup.helper.HttpConnection.execute(HttpConnection.java:135) at org.jsoup.helper.HttpConnection.get(HttpConnection.java:124
Source: OSChina
Published: 2011-06-26 17:31:00
The result jsoup reads is as follows: (screenshot not preserved)
What I see in the Firefox browser, however, is this: (screenshot not preserved)
How should this be understood? My tentative conclusion is that jsoup fetches the page as it is before the JavaScript runs.
Source: OSChina
Published: 2012-07-18 10:08:00
That is, I want to keep the specified HTML tags together with their CSS. I just tested

    String safe = Jsoup.clean(unsafe, Whitelist.basic());

and this strips the CSS as well. Is there a good way to clean the HTML without removing the CSS?
Source: OSChina
Published: 2012-08-22 09:48:00
This morning OSChina reworked its code, replacing the old Htmlparser with jsoup for sanitizing all user content — posts, replies, comments and so on. The filtering rules are also much stricter than before, mainly to guard against cross-site scripting attacks. If you find your input mangled — for example a tag or attribute being filtered out — please let me know and I will handle it case by case. Below is the code OSChina uses to filter user input:

    private final static Whitelist user_content_filter = Whitelist.relaxed();
    static {
        user_content_filter.addTags("embed", "object", "param", "span", "div");
        user_content_filter.addAttributes(":all", "style", "class", "id", "name");
        user_content_filter.addAttributes("object", "width", "height", "classid", "codebase");
        user_content_filter.addAttributes("param", "name", "value");
        user_content_filter.addAttributes("embed", "src", "quality", "width", "height",
                "allowFullScreen", "allowScriptAccess", "flashvars", "name", "type", "pluginspage");
    }

    /**
     * Filter user-supplied content
     * @param html
     * @return
     */
    public static String filterUserInputContent(String html) {
        if (StringUtils.isBlank(html))
            return "";
        return Jsoup.clean(html, user_content_filter);
        //return filterScriptAndStyle(html);
    }

Done!
Source: OSChina
Published: 2010-08-05 09:58:00
How can jsoup extract sample code from a page verbatim? For example, a page contains a code sample displayed in a certain way (the sample was lost when this post was published). After scraping, I want to display it exactly as it appeared, but in practice line breaks and spaces get inserted and half-width double quotes are turned into full-width ones. How can I fetch it unchanged? What I really want is to extract an article the way Cnblogs posts do — code mixed with text — and have it display correctly elsewhere. Any pointers appreciated, thanks!
Source: OSChina
Published: 2018-02-02 11:14:00
How do I submit a JSON parameter in jsoup.connect?
Source: OSChina
Published: 2015-09-03 20:27:00
jsoup is not just an HTML parser — its built-in HTTP client is very pleasant to use, and much simpler than HttpClient at least. However, I found that when reading large content, whether text or images, the response gets truncated. Some digging showed that jsoup's default limit is 1024*1024 bytes, i.e. 1 MB. The fix is to set maxBodySize on the connection, like so:

    Document doc = Jsoup.connect(url)
            .header("Accept-Encoding", "gzip, deflate")
            .userAgent("Mozilla/5.0 (Windows NT 6.1; WOW64; rv:23.0) Gecko/20100101 Firefox/23.0")
            .maxBodySize(0)
            .timeout(600000)
            .get();

A value of 0 means no size limit — use it with care :)
Source: OSChina
Published: 2017-11-24 20:11:00
The requirement is this: I am building a news client and want to display article content in a WebView, which means parsing the HTML and cutting out just the parts I need. jsoup looked convenient for this, but tutorials are scarce, so I have been relying on the official documentation. I can already fetch the article content, but some articles span several pages ("next page" links and so on), and for those the content div contains a lot of extra markup — share links, "more news" blocks, etc. (The HTML sample showing the news-content div and its closing tag was lost when this post was published.) I want to strip out all of those class-tagged elements but do not know how to write it. My Java code:

    try {
        if (inStream == null) {
            Toast.makeText(NewsShowActivity.this, "内容不存在，或网络连接错误", Toast.LENGTH_SHORT).show();
        }
        if (inStream != null) {
            doc = Jsoup.parse(inStream, "gb2312", "http://www.xxx.com/");
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
    content_nr = doc.getElementById("content"); // fetch the content by its HTML id
    // how do I delete the useless elements under this tag?
    webView.loadData(content_nr.html(), "text/html", "UTF-8");

Please help — much appreciated!
Source: OSChina
Published: 2012-04-20 11:24:00
Does anyone know why the page I fetch is always garbled? I have tried every encoding I can think of, with no luck. The page is http://sports.xinmin.cn/2013/10/27/22446248.html — could someone take a look when they have a moment? Note that the code below hardcodes UTF-8 regardless of the page's actual charset:

    public static String readHtml(String myurl) {
        StringBuffer sb = new StringBuffer("");
        URL url;
        try {
            url = new URL(myurl);
            BufferedReader br = new BufferedReader(new InputStreamReader(url.openStream(), "UTF-8"));
            String s = "";
            while ((s = br.readLine()) != null) {
                sb.append(s + "\r\n");
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        return sb.toString();
    }
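If the page declares a legacy charset such as GBK — common for Chinese news sites of that era, though an assumption about this particular page — then forcing UTF-8 in InputStreamReader will garble it. A minimal stdlib sketch (the sample text is mine, not from the page):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class CharsetDemo {
    public static void main(String[] args) {
        Charset gbk = Charset.forName("GBK");
        byte[] pageBytes = "体育新闻".getBytes(gbk); // stand-in for the raw bytes of a GBK page

        // Decoding with the page's real charset recovers the text...
        String right = new String(pageBytes, gbk);
        // ...while forcing UTF-8, as the reader above does, yields replacement junk.
        String wrong = new String(pageBytes, StandardCharsets.UTF_8);

        System.out.println(right);               // 体育新闻
        System.out.println(right.equals(wrong)); // false
    }
}
```

jsoup itself reads the charset from the HTTP headers or the page's meta tag, which is why Jsoup.connect(url).get() usually gets this right while a hand-rolled InputStreamReader with a fixed charset does not.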
Source: OSChina
Published: 2013-10-29 13:27:00
    String html = "…"; // the original HTML sample was lost when this post was published
    System.out.println(Jsoup.parse(html).text());

The console output is ??? — how can I fix this?
Source: OSChina
Published: 2012-07-25 22:42:00
    Document doc = Jsoup.connect(URL).timeout(1000).get();
    String docStr = doc.toString();
    String str = new String(docStr.getBytes("ISO8859-1"), "UTF-8");
    Document document = Jsoup.parse(str);

What is wrong with this approach? The scraped content comes out garbled. I have genuinely tried hard — I read several articles on this and still could not solve it.
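As background for why this trick is fragile: the getBytes("ISO8859-1") round-trip only repairs a string that was first mis-decoded as ISO-8859-1. Applied to text jsoup has already decoded correctly, it destroys it, because characters outside Latin-1 cannot survive getBytes("ISO8859-1"). A stdlib sketch (sample text is mine):

```java
import java.nio.charset.StandardCharsets;

public class ReDecodeDemo {
    public static void main(String[] args) {
        String original = "新闻";

        // Case 1: the text really was mis-decoded as ISO-8859-1 — the trick repairs it.
        String misDecoded = new String(original.getBytes(StandardCharsets.UTF_8),
                StandardCharsets.ISO_8859_1);
        String repaired = new String(misDecoded.getBytes(StandardCharsets.ISO_8859_1),
                StandardCharsets.UTF_8);
        System.out.println(repaired.equals(original)); // true

        // Case 2: the text was already correct — the same trick corrupts it,
        // since getBytes(ISO-8859-1) turns every non-Latin-1 character into '?'.
        String corrupted = new String(original.getBytes(StandardCharsets.ISO_8859_1),
                StandardCharsets.UTF_8);
        System.out.println(corrupted.equals(original)); // false
    }
}
```

So if doc.toString() is already garbled, the damage happened at fetch time; letting jsoup detect the charset, or passing the right one to Jsoup.parse(in, charsetName, baseUri), is the reliable fix.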
Source: OSChina
Published: 2012-11-20 21:59:00
My jsoup code:

    URL url_1 = new URL(filepath);
    Document text = Jsoup.parse(url_1, 5 * 1000);
    System.out.println(text.html());

Could an expert please help?
Source: OSChina
Published: 2011-01-14 10:51:00
(The HTML samples in this post were lost when it was published.)

    XX

is converted into

    XX

How do I get around this?
Source: OSChina
Published: 2014-11-17 15:33:00
Java programming: use jsoup to extract the tables from a web page, determine whether each table is a regular grid of m rows by n columns, and then connect to a database and create a matching table. I hope someone can help. Below is the little I have written so far — ideally it can be extended, on this basis, to determine the row and column counts (the database part is not written yet):

    package html2;

    import java.io.IOException;
    import java.text.ParseException;

    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;
    import org.jsoup.select.Elements;

    public class test1 {
        public static void main(String[] args) throws ParseException {
            try {
                String url = "http://cxxy.seu.edu.cn";
                Document document = Jsoup.connect(url).get();
                Elements hang = document.select("table").select("tr");
                int rows = hang.size();
                for (int i = 0; i < rows; i++) {
                    Elements lie = hang.get(i).select("td");
                    int lines = lie.size();
                    for (int j = i; j < lines; j++) {
                        // (the post was cut off here; loop body reconstructed minimally)
                        System.out.print(lie.get(j).text() + "\t");
                    }
                    System.out.println();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

Source: OSChina
Published: 2017-03-30 17:09:00
My regular expression is the following — note that the angle-bracketed names of the named groups were evidently swallowed when the post was published, so every group now appears as a bare (?[^<]*). The pattern begins

    String Regular = "([^>]*>){3}(?[^<]*)([^>]*>){3}(?[^<]*)([^>]*>){6}(?[^<]*)([^>]*>){3}(?[^<]*)…";

(the tail continues with about thirty repetitions of ([^>]*>){2}(?[^<]*), and the post pasted the same pattern twice). Compiling it blew up with this error:

    Exception in thread "main" java.util.regex.PatternSyntaxException: named capturing group is missing trailing '>' near index 259
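For reference, Java named groups must be written (?<name>…), and the name must match [a-zA-Z][a-zA-Z0-9]* — a name containing other characters (Chinese, for instance) makes the parser stop before the closing '>' and report exactly the "named capturing group is missing trailing '>'" message above. A minimal stdlib sketch with a hypothetical group name:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.regex.PatternSyntaxException;

public class NamedGroupDemo {
    public static void main(String[] args) {
        // Valid: the ASCII group name sits between '<' and '>'.
        Pattern ok = Pattern.compile("<td>(?<cell>[^<]*)</td>");
        Matcher m = ok.matcher("<td>42</td>");
        if (m.find()) {
            System.out.println(m.group("cell")); // 42
        }

        // Invalid: a non-ASCII group name is rejected with the same
        // "missing trailing '>'" complaint as in the post.
        try {
            Pattern.compile("(?<名字>[^<]*)");
        } catch (PatternSyntaxException e) {
            System.out.println("invalid group name: " + e.getDescription());
        }
    }
}
```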
Source: OSChina
Published: 2017-03-13 11:26:00
I am a newbie just trying out jsoup, and I have a question about Android development: why does the code below always jump straight into the catch block? Jsoup.connect("http:/www.baidu.com").get() fails as well. Is there anything special to watch out for when using jsoup on Android?

    package com.lzc.abckk;

    import java.net.URL;

    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;

    import android.app.Activity;
    import android.os.Bundle;
    import android.view.Menu;

    public class MainActivity extends Activity {

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            System.out.println("dddddddddddddddddddddddd");
            try {
                Document doc = Jsoup.parse(new URL("HTTP://WWW.BAIDU.COM"), 5000);
            } catch (Exception e) {
                // TODO: handle exception
            }
            setContentView(R.layout.activity_main);
        }

        @Override
        public boolean onCreateOptionsMenu(Menu menu) {
            // Inflate the menu; this adds items to the action bar if it is present.
            getMenuInflater().inflate(R.menu.main, menu);
            return true;
        }
    }

Source: OSChina
Published: 2013-06-20 22:19:07
As the title says — can anyone recommend a cost-effective WordPress host, ideally in Hong Kong? Also, how well does WordPress run on Windows? Is it slower than on a Linux host?
Source: OSChina
Published: 2013-04-29 19:39:00
@紅薯 Hello, I would like to ask you something: I am not technical, but I want to find someone on this forum who can build me a site with WP — not a foreign-trade site, but a Chinese-language site for the domestic market. I searched Baidu and everyone I found either builds foreign-trade sites or just applies templates, which felt too low-end, so I came here for your help. Thanks.
Source: OSChina
Published: 2015-04-25 20:00:00
Language: PHP. I have been programming for eight months and still have not built my own site. I could hand-build one without trouble (though the code quality would not be anything to boast about), but the WordPress codebase is hard going for me. I would like the experienced to weigh in: from the standpoint of improving my coding skills, should I build the site by hand or use site-building software?
Source: OSChina
Published: 2014-05-21 09:28:00
------ Goal ------
The website displays all results of pings performed from server B (the IP list and the website live on server A).

------ Components ------
Server A (Hong Kong): stores the list of IPs to be pinged, plus the website.
Server B (mainland China): actively pings each IP on server A's list.
Website (hosted on server A).

------ Planned flow ------
Website is opened → it asks server A to send the IP list to server B → server B receives it and pings each IP → B sends all ping results back to A → A uploads the results to the website and displays them to the user.

------ Difficulty ------
How do I get the API results uploaded to the website and displayed?

------ Looking for ------
What technologies can do this, and how?
Source: OSChina
Published: 2019-11-09 21:23:00
I deployed a WordPress site with the Enlighter — Customizable Syntax Highlighter plugin, but long code lines get soft-wrapped, as in the screenshot below (not preserved) — I do not want line 50 to wrap. The plugin itself has an option to disable automatic wrapping, but it has no effect. Has anyone run into a similar problem? Please share.
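If the plugin's own wrap option is being ignored (a theme stylesheet may be overriding it), a CSS override added via the theme or the Customizer usually works. The selector below assumes Enlighter's default container class — inspect the rendered HTML in DevTools and adjust it to match your installed version:

```css
/* Assumed Enlighter markup: force no soft-wrapping inside highlighted
   code blocks and scroll horizontally instead. */
.EnlighterJS {
    white-space: pre !important;
    overflow-x: auto !important;
    word-wrap: normal !important;
}
```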
Source: OSChina
Published: 2019-04-17 11:41:00
While parsing Word documents with POI I ran into a problem: POI exposes pictures, tables and text separately, so once I have extracted the pictures, text and tables, I no longer know where each picture sat within the text. For example:

    public static List<String> getWordContent2007(String path)
            throws IOException, XmlException, OpenXML4JException {
        InputStream is = new FileInputStream(path);
        List<String> list = new ArrayList<>();
        XWPFDocument doc1 = new XWPFDocument(is);
        List<XWPFParagraph> paras = doc1.getParagraphs();
        // extract the pictures
        List<XWPFPictureData> picEs = doc1.getAllPictures();
        for (XWPFPictureData pic : picEs) {
            System.out.println(pic.getPictureType() + File.separator
                    + pic.suggestFileExtension() + File.separator + pic.getFileName());
            byte[] bytev = pic.getData();
            FileOutputStream fos = new FileOutputStream("G:\\test\\" + pic.getFileName());
            fos.write(bytev);
            fos.close();
        }
        for (XWPFParagraph graph : paras) {
            String text = graph.getParagraphText();
            String style = graph.getStyle();
            if ("1".equals(style)) {
                // System.out.println(text + "--[" + style + "]");
            } else if ("2".equals(style)) {
                // System.out.println(text + "--[" + style + "]");
            } else if ("3".equals(style)) {
                // System.out.println(text + "--[" + style + "]");
            } else {
                System.out.println(graph.getPictureText());
            }
            list.add(text);
        }
        return list;
    }

Having extracted the pictures and the content, I want to recombine them in the order they appear in the Word file, but I cannot work out how to recover the positional relationship between the pictures and the text.
Source: OSChina
Published: 2016-11-21 17:49:00
Environment: Ubuntu 16.04, with PHP 7 and nginx 1.10 installed via apt. After configuring them, PHP is not being parsed, as shown in the screenshot (not preserved). My nginx configuration is as follows (also not preserved): which part of the configuration did I get wrong?
Source: OSChina
Published: 2018-05-29 09:48:00
Does type=index in an EXPLAIN plan mean the index is actually used? It still feels very slow — how does it differ from a full table scan? Also, does type=range count as using the index?
Source: OSChina
Published: 2018-05-15 14:22:00
For content written with front-end rich-text HTML editors (CKEditor, for example), how do you prevent XSS?
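Two standard defenses apply. For plain-text fields, escape on output; for rich text from an editor like CKEditor, escaping would destroy the markup, so the usual approach is a server-side whitelist sanitizer such as jsoup's Jsoup.clean (the approach OSChina describes above). A minimal stdlib sketch of the escaping half — the helper name is mine:

```java
public class EscapeDemo {
    // Minimal HTML escaping for plain-text fields. Rich-text content needs a
    // whitelist sanitizer instead, since escaping would destroy the editor's markup.
    static String escapeHtml(String s) {
        StringBuilder out = new StringBuilder(s.length());
        for (char c : s.toCharArray()) {
            switch (c) {
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '&':  out.append("&amp;");  break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#39;");  break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(escapeHtml("<script>alert(1)</script>"));
        // → &lt;script&gt;alert(1)&lt;/script&gt;
    }
}
```

The escaped form renders the payload as inert text instead of executing it, which is the core of output-side XSS prevention.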
Source: OSChina
Published: 2018-04-13 11:05:00