Kafka cluster, version 0.8.2.0, consisting of three hosts: hetserver1, hetserver2, and hetserver3.
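For context, a minimal sketch of the broker-side ZooKeeper settings implied by the logs below (the connect string and the 6000 ms session timeout appear verbatim in the hetserver2/hetserver3 entries; the broker.id value and the file layout are assumptions):

# server.properties (Kafka 0.8.2.x) -- ZooKeeper-related settings implied by the logs.
# Logs show live broker ids 40, 41 and 42; the id used here is just an example.
broker.id=42
zookeeper.connect=hetserver1:2181,hetserver2:2181,hetserver3:2181
# Matches "negotiated timeout = 6000" in the ZkClient log lines.
zookeeper.session.timeout.ms=6000
# Assumed; not visible in the logs.
zookeeper.connection.timeout.ms=6000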
hetserver1 started reporting errors at 21:39 on March 17:
2020-03-17 21:39:00,040 ERROR kafka.network.Processor: Closing socket for /172.19.4.12 because of error
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:487)
at kafka.api.TopicDataSend.writeTo(FetchResponse.scala:123)
at kafka.network.MultiSend.writeTo(Transmission.scala:101)
at kafka.api.FetchResponseSend.writeTo(FetchResponse.scala:231)
at kafka.network.Processor.write(SocketServer.scala:473)
at kafka.network.Processor.run(SocketServer.scala:343)
at java.lang.Thread.run(Thread.java:745)
2020-03-17 21:39:00,071 ERROR kafka.network.Processor: Closing socket for /172.19.4.12 because of error
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:487)
at kafka.api.TopicDataSend.writeTo(FetchResponse.scala:123)
at kafka.network.MultiSend.writeTo(Transmission.scala:101)
at kafka.api.FetchResponseSend.writeTo(FetchResponse.scala:231)
at kafka.network.Processor.write(SocketServer.scala:473)
at kafka.network.Processor.run(SocketServer.scala:343)
at java.lang.Thread.run(Thread.java:745)
2020-03-17 21:39:00,073 ERROR kafka.network.Processor: Closing socket for /172.19.4.12 because of error
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:487)
at kafka.api.TopicDataSend.writeTo(FetchResponse.scala:123)
at kafka.network.MultiSend.writeTo(Transmission.scala:101)
at kafka.api.FetchResponseSend.writeTo(FetchResponse.scala:231)
at kafka.network.Processor.write(SocketServer.scala:473)
at kafka.network.Processor.run(SocketServer.scala:343)
at java.lang.Thread.run(Thread.java:745)
hetserver2 also reported errors:
2020-03-17 21:39:00,193 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hetserver1/172.19.4.12:2181. Will not attempt to authenticate using SASL (unknown error)
2020-03-17 21:39:00,194 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hetserver1/172.19.4.12:2181, initiating session
2020-03-17 21:39:00,194 INFO org.I0Itec.zkclient.ZkClient: zookeeper state changed (Expired)
2020-03-17 21:39:00,195 INFO org.apache.zookeeper.ClientCnxn: Unable to reconnect to ZooKeeper service, session 0xa70e4797252000e has expired, closing socket connection
2020-03-17 21:39:00,195 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=hetserver1:2181,hetserver2:2181,hetserver3:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@5e39570d
2020-03-17 21:39:00,196 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down
2020-03-17 21:39:00,196 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hetserver3/172.19.4.14:2181. Will not attempt to authenticate using SASL (unknown error)
2020-03-17 21:39:00,197 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hetserver3/172.19.4.14:2181, initiating session
2020-03-17 21:39:00,198 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server hetserver3/172.19.4.14:2181, sessionid = 0x870e479725d0517, negotiated timeout = 6000
2020-03-17 21:39:00,198 INFO org.I0Itec.zkclient.ZkClient: zookeeper state changed (SyncConnected)
2020-03-17 21:39:00,297 INFO kafka.controller.ReplicaStateMachine$BrokerChangeListener: [BrokerChangeListener on Controller 41]: Broker change listener fired for path /brokers/ids with children 42
2020-03-17 21:39:00,301 INFO kafka.controller.ReplicaStateMachine$BrokerChangeListener: [BrokerChangeListener on Controller 41]: Newly added brokers: , deleted brokers: 41,40, all live brokers: 42
2020-03-17 21:39:00,301 INFO kafka.controller.RequestSendThread: [Controller-41-to-broker-41-send-thread], Shutting down
2020-03-17 21:39:00,301 INFO kafka.controller.RequestSendThread: [Controller-41-to-broker-41-send-thread], Stopped
2020-03-17 21:39:00,301 INFO kafka.controller.RequestSendThread: [Controller-41-to-broker-41-send-thread], Shutdown completed
2020-03-17 21:39:00,302 INFO kafka.controller.RequestSendThread: [Controller-41-to-broker-40-send-thread], Shutting down
2020-03-17 21:39:00,302 INFO kafka.controller.RequestSendThread: [Controller-41-to-broker-40-send-thread], Stopped
2020-03-17 21:39:00,302 INFO kafka.controller.RequestSendThread: [Controller-41-to-broker-40-send-thread], Shutdown completed
2020-03-17 21:39:00,304 INFO kafka.controller.KafkaController: [Controller 41]: Broker failure callback for 41,40
2020-03-17 21:39:00,306 INFO kafka.controller.KafkaController: [Controller 41]: Removed ArrayBuffer() from list of shutting down brokers.
2020-03-17 21:39:00,308 INFO kafka.controller.PartitionStateMachine: [Partition state machine on Controller 41]: Invoking state change to OfflinePartition for partitions [__consumer_offsets,19],[__consumer_offsets,47],[__consumer_offsets,41],[__consumer_offsets,29],[session-location,0],[__consumer_offsets,17],[__consumer_offsets,10],[hetASUPfldTopic,0],[__consumer_offsets,14],[__consumer_offsets,40],[hetACDMTopic,0],[__consumer_offsets,26],[__consumer_offsets,20],[__consumer_offsets,22],[__consumer_offsets,5],[push-result-error,0],[__consumer_offsets,8],[__consumer_offsets,23],[__consumer_offsets,11],[hetAsupMsgTopic,0],[__consumer_offsets,13],[__consumer_offsets,49],[__consumer_offsets,28],[__consumer_offsets,4],[__consumer_offsets,37],[__consumer_offsets,44],[__consumer_offsets,31],[__consumer_offsets,34],[__consumer_offsets,46],[btTaskTopic,0],[__consumer_offsets,25],[__consumer_offsets,43],[__consumer_offsets,32],[__consumer_offsets,35],[__consumer_offsets,7],[__consumer_offsets,38],[__consumer_offsets,1],[HetPetaTopic,0],[__consumer_offsets,2],[__consumer_offsets,16]
2020-03-17 21:39:00,314 ERROR state.change.logger: Controller 41 epoch 57126 aborted leader election for partition [__consumer_offsets,19] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 41 went through a soft failure and another controller was elected with epoch 57127.
2020-03-17 21:39:00,314 ERROR state.change.logger: Controller 41 epoch 57126 encountered error while electing leader for partition [__consumer_offsets,19] due to: aborted leader election for partition [__consumer_offsets,19] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 41 went through a soft failure and another controller was elected with epoch 57127..
2020-03-17 21:39:00,314 ERROR state.change.logger: Controller 41 epoch 57126 initiated state change for partition [__consumer_offsets,19] from OfflinePartition to OnlinePartition failed
kafka.common.StateChangeFailedException: encountered error while electing leader for partition [__consumer_offsets,19] due to: aborted leader election for partition [__consumer_offsets,19] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 41 went through a soft failure and another controller was elected with epoch 57127..
at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:380)
at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:206)
at kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:120)
at kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:117)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
at kafka.controller.PartitionStateMachine.triggerOnlinePartitionStateChange(PartitionStateMachine.scala:117)
at kafka.controller.KafkaController.onBrokerFailure(KafkaController.scala:453)
at kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ReplicaStateMachine.scala:373)
at kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:359)
at kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:359)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply$mcV$sp(ReplicaStateMachine.scala:358)
at kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:357)
at kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:357)
at kafka.utils.Utils$.inLock(Utils.scala:561)
at kafka.controller.ReplicaStateMachine$BrokerChangeListener.handleChildChange(ReplicaStateMachine.scala:356)
at org.I0Itec.zkclient.ZkClient$7.run(ZkClient.java:568)
at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
Caused by: kafka.common.StateChangeFailedException: aborted leader election for partition [__consumer_offsets,19] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 41 went through a soft failure and another controller was elected with epoch 57127.
at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:354)
... 23 more
The errors on hetserver3 are as follows:
2020-03-17 21:39:00,660 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hetserver3/172.19.4.14:2181. Will not attempt to authenticate using SASL (unknown error)
2020-03-17 21:39:00,661 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hetserver3/172.19.4.14:2181, initiating session
2020-03-17 21:39:00,661 INFO org.I0Itec.zkclient.ZkClient: zookeeper state changed (Expired)
2020-03-17 21:39:00,661 INFO org.apache.zookeeper.ClientCnxn: Unable to reconnect to ZooKeeper service, session 0x870e47715360000 has expired, closing socket connection
2020-03-17 21:39:00,661 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=hetserver1:2181,hetserver2:2181,hetserver3:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@68246cf
2020-03-17 21:39:00,663 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hetserver2/172.19.4.13:2181. Will not attempt to authenticate using SASL (unknown error)
2020-03-17 21:39:00,664 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hetserver2/172.19.4.13:2181, initiating session
2020-03-17 21:39:00,666 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server hetserver2/172.19.4.13:2181, sessionid = 0xa70e47972520504, negotiated timeout = 6000
2020-03-17 21:39:00,666 INFO org.I0Itec.zkclient.ZkClient: zookeeper state changed (SyncConnected)
2020-03-17 21:39:00,666 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down
2020-03-17 21:39:00,763 INFO kafka.controller.ReplicaStateMachine$BrokerChangeListener: [BrokerChangeListener on Controller 40]: Broker change listener fired for path /brokers/ids with children 42
2020-03-17 21:39:00,764 INFO kafka.controller.ReplicaStateMachine$BrokerChangeListener: [BrokerChangeListener on Controller 40]: Newly added brokers: , deleted brokers: 41,40, all live brokers: 42
2020-03-17 21:39:00,764 INFO kafka.controller.RequestSendThread: [Controller-40-to-broker-41-send-thread], Shutting down
2020-03-17 21:39:00,764 INFO kafka.controller.RequestSendThread: [Controller-40-to-broker-41-send-thread], Stopped
2020-03-17 21:39:00,764 INFO kafka.controller.RequestSendThread: [Controller-40-to-broker-41-send-thread], Shutdown completed
2020-03-17 21:39:00,764 INFO kafka.controller.RequestSendThread: [Controller-40-to-broker-40-send-thread], Shutting down
2020-03-17 21:39:00,764 INFO kafka.network.Processor: Closing socket connection to /172.19.4.14.
2020-03-17 21:39:00,764 INFO kafka.controller.RequestSendThread: [Controller-40-to-broker-40-send-thread], Stopped
2020-03-17 21:39:00,764 INFO kafka.controller.RequestSendThread: [Controller-40-to-broker-40-send-thread], Shutdown completed
2020-03-17 21:39:00,764 INFO kafka.controller.KafkaController: [Controller 40]: Broker failure callback for 41,40
2020-03-17 21:39:00,765 INFO kafka.controller.KafkaController: [Controller 40]: Removed ArrayBuffer() from list of shutting down brokers.
2020-03-17 21:39:00,765 INFO kafka.controller.PartitionStateMachine: [Partition state machine on Controller 40]: Invoking state change to OfflinePartition for partitions [__consumer_offsets,19],[__consumer_offsets,30],[__consumer_offsets,47],[__consumer_offsets,29],[__consumer_offsets,41],[session-location,0],[HetPetaAddTopic,0],[__consumer_offsets,39],[hetASUPfldTopic,0],[__consumer_offsets,10],[__consumer_offsets,17],[hetFltMsgTopic,0],[__consumer_offsets,14],[__consumer_offsets,40],[hetACDMTopic,0],[__consumer_offsets,18],[__consumer_offsets,0],[__consumer_offsets,26],[__consumer_offsets,24],[__consumer_offsets,33],[__consumer_offsets,20],[__consumer_offsets,21],[__consumer_offsets,3],[__consumer_offsets,5],[__consumer_offsets,22],[hetVideoTopic,0],[__consumer_offsets,12],[push-result-error,0],[__consumer_offsets,8],[__consumer_offsets,23],[__consumer_offsets,15],[__consumer_offsets,11],[hetAsupMsgTopic,0],[__consumer_offsets,48],[__consumer_offsets,13],[__consumer_offsets,49],[__consumer_offsets,6],[__consumer_offsets,28],[__consumer_offsets,4],[__consumer_offsets,37],[__consumer_offsets,31],[push-result,0],[__consumer_offsets,44],[hetTaskTopic,0],[__consumer_offsets,42],[__consumer_offsets,34],[__consumer_offsets,46],[btTaskTopic,0],[__consumer_offsets,25],[__consumer_offsets,27],[__consumer_offsets,45],[__consumer_offsets,32],[__consumer_offsets,43],[__consumer_offsets,36],[__consumer_offsets,35],[__consumer_offsets,7],[__consumer_offsets,38],[__consumer_offsets,9],[__consumer_offsets,1],[HetPetaTopic,0],[__consumer_offsets,2],[__consumer_offsets,16]
2020-03-17 21:39:00,767 ERROR state.change.logger: Controller 40 epoch 57125 aborted leader election for partition [__consumer_offsets,19] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 40 went through a soft failure and another controller was elected with epoch 57127.
2020-03-17 21:39:00,767 ERROR state.change.logger: Controller 40 epoch 57125 encountered error while electing leader for partition [__consumer_offsets,19] due to: aborted leader election for partition [__consumer_offsets,19] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 40 went through a soft failure and another controller was elected with epoch 57127..
2020-03-17 21:39:00,767 ERROR state.change.logger: Controller 40 epoch 57125 initiated state change for partition [__consumer_offsets,19] from OfflinePartition to OnlinePartition failed
kafka.common.StateChangeFailedException: encountered error while electing leader for partition [__consumer_offsets,19] due to: aborted leader election for partition [__consumer_offsets,19] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 40 went through a soft failure and another controller was elected with epoch 57127..
at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:380)
at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:206)
at kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:120)
at kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:117)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
at kafka.controller.PartitionStateMachine.triggerOnlinePartitionStateChange(PartitionStateMachine.scala:117)
at kafka.controller.KafkaController.onBrokerFailure(KafkaController.scala:453)
at kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ReplicaStateMachine.scala:373)
at kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:359)
at kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1$$anonfun$apply$mcV$sp$1.apply(ReplicaStateMachine.scala:359)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply$mcV$sp(ReplicaStateMachine.scala:358)
at kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:357)
at kafka.controller.ReplicaStateMachine$BrokerChangeListener$$anonfun$handleChildChange$1.apply(ReplicaStateMachine.scala:357)
at kafka.utils.Utils$.inLock(Utils.scala:561)
at kafka.controller.ReplicaStateMachine$BrokerChangeListener.handleChildChange(ReplicaStateMachine.scala:356)
at org.I0Itec.zkclient.ZkClient$7.run(ZkClient.java:568)
at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
Caused by: kafka.common.StateChangeFailedException: aborted leader election for partition [__consumer_offsets,19] since the LeaderAndIsr path was already written by another controller. This probably means that the current controller 40 went through a soft failure and another controller was elected with epoch 57127.
at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:354)
... 23 more
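Reading the three logs together: the ZooKeeper sessions of brokers 40 and 41 appear to have expired at 21:39 (the session timeout is only 6 seconds), leaving broker 42 as the only live broker; the stale controllers on 40 and 41 then aborted their leader elections because a newer controller with epoch 57127 had already been elected. One quick way to confirm which broker currently holds the controller role is to read the /controller and /controller_epoch znodes. The Java sketch below is illustrative only: the connect string and timeout are taken from the logs, while the class name and one-off read logic are assumptions.

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

// Reads the znodes Kafka uses to track the active controller, to see
// which broker id currently holds the role and the latest epoch.
public class ControllerCheck {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper(
                "hetserver1:2181,hetserver2:2181,hetserver3:2181",
                6000,
                new Watcher() {
                    @Override
                    public void process(WatchedEvent event) {
                        // no-op watcher; only one-off reads are performed
                    }
                });
        try {
            // /controller holds JSON such as {"version":1,"brokerid":42,"timestamp":"..."}
            byte[] controller = zk.getData("/controller", false, null);
            // /controller_epoch holds the epoch as a plain integer string, e.g. "57127"
            byte[] epoch = zk.getData("/controller_epoch", false, null);
            System.out.println("controller znode: " + new String(controller, "UTF-8"));
            System.out.println("controller epoch: " + new String(epoch, "UTF-8"));
        } finally {
            zk.close();
        }
    }
}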