No member quit events received since 2025-05-26
-
https://github.com/mamoe/mirai/issues/2882
Mirai version: 2.14.0-RC
mirai-api-http version: v2.9.1
MCL Addon version: v2.0.2
iTXTech MCL Version: 2.0.1-b5303b5
protocol: IPAD
heartbeatStrategy: STAT_HB
The bot runs without a signing service, logs in automatically by password, has never hit a captcha, and has been working normally over the WebSocket API the whole time. One business-side feature is last-place elimination: a member-join event triggers a check, and if the group's member count exceeds a configured cap (set to 1995), the member who has gone the longest without speaking is kicked. This ran stably for years. Over the last few days I noticed the group kept shrinking, yet elimination was still firing even when the count was down to 1967. While debugging I invoked the `memberList` API to fetch the member list and refresh the business-side member cache: mirai still returned 1995 members. After restarting mirai, it returned 1967 members.
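For reference, a minimal sketch of that elimination rule, written directly against the mirai core Kotlin API for illustration (the actual deployment drives it through mirai-api-http over WebSocket); `CAP` and the kick message are placeholders:

```kotlin
import net.mamoe.mirai.event.GlobalEventChannel
import net.mamoe.mirai.event.events.MemberJoinEvent

const val CAP = 1995 // the configured cap described above

fun installElimination() {
    GlobalEventChannel.subscribeAlways<MemberJoinEvent> { event ->
        val members = event.group.members // the bot itself is not included
        if (members.size > CAP) {
            // lastSpeakTimestamp is in epoch seconds; 0 means "never spoke",
            // so the longest-silent member sorts first
            members.minByOrNull { it.lastSpeakTimestamp }
                ?.kick("last-place elimination: longest time without speaking")
        }
    }
}
```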
Going back through the logs, both the business side's log and mirai's log show that the last quit event (MemberLeaveEvent.Quit) was received at 2025-05-26 01:39:46; after that, no quit event ever arrived again.
The bot manages many groups of two to three thousand members, so it is safe to conclude that quit events stopped arriving on 2025-05-26. My current guess is that Mirai also caches the member list internally: since the core never receives the quit events, the member list it returned before the restart never had the departed members removed.
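Given how many large groups the bot manages, even a day or two of silence in quit events is already a strong anomaly signal. A hypothetical watchdog like the sketch below (names and thresholds are mine, not part of the original setup) would have surfaced the problem long before the roster drifted by 28 members:

```kotlin
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.delay
import kotlinx.coroutines.isActive
import kotlinx.coroutines.launch
import net.mamoe.mirai.event.GlobalEventChannel
import net.mamoe.mirai.event.events.MemberLeaveEvent

@Volatile
var lastQuitAtMs = System.currentTimeMillis()

fun installQuitWatchdog(scope: CoroutineScope) {
    GlobalEventChannel.subscribeAlways<MemberLeaveEvent.Quit> {
        lastQuitAtMs = System.currentTimeMillis()
    }
    scope.launch {
        while (isActive) {
            delay(6 * 60 * 60 * 1000L) // re-check every 6 hours
            val silentH = (System.currentTimeMillis() - lastQuitAtMs) / 3_600_000
            // across dozens of 2000-3000 member groups, 48 h with zero quits
            // almost certainly means events stopped, not that nobody left
            if (silentH >= 48) println("WARN: no MemberLeaveEvent.Quit for $silentH h")
        }
    }
}
```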
Correspondingly, starting on 2025-05-26 the mirai log began to show the error below frequently, and quitting a test group reliably reproduces it:

```
2025-06-14 20:39:01 E/Bot.[QQ id]: Exception on parsing packet.
java.lang.IllegalStateException: Exception in net.mamoe.mirai.internal.network.notice.group.GroupOrMemberListNoticeProcessor@7956774f while processing packet PbMsgInfo.
	at net.mamoe.mirai.internal.network.components.NoticeProcessorPipelineImpl.handleExceptionInProcess(NoticeProcessorPipeline.kt:105)
	at net.mamoe.mirai.internal.network.components.NoticeProcessorPipelineImpl.handleExceptionInProcess(NoticeProcessorPipeline.kt:80)
	at net.mamoe.mirai.internal.pipeline.AbstractProcessorPipeline.process$suspendImpl(ProcessorPipeline.kt:289)
	at net.mamoe.mirai.internal.pipeline.AbstractProcessorPipeline.process(ProcessorPipeline.kt)
	at net.mamoe.mirai.internal.pipeline.AbstractProcessorPipeline.process$suspendImpl(ProcessorPipeline.kt:275)
	at net.mamoe.mirai.internal.pipeline.AbstractProcessorPipeline.process(ProcessorPipeline.kt)
	at net.mamoe.mirai.internal.network.protocol.packet.chat.receive.OnlinePushPbPushTransMsg.decode(OnlinePush.PbPushTransMsg.kt:45)
	at net.mamoe.mirai.internal.network.components.PacketCodecImpl.processBody(PacketCodec.kt:492)
	at net.mamoe.mirai.internal.network.handler.CommonNetworkHandler$PacketDecodePipeline.processBody(CommonNetworkHandler.kt:157)
	at net.mamoe.mirai.internal.network.handler.CommonNetworkHandler$PacketDecodePipeline.access$processBody(CommonNetworkHandler.kt:102)
	at net.mamoe.mirai.internal.network.handler.CommonNetworkHandler$PacketDecodePipeline$1$3$1.invokeSuspend(CommonNetworkHandler.kt:126)
	at net.mamoe.mirai.internal.network.handler.CommonNetworkHandler$PacketDecodePipeline$1$3$1.invoke(CommonNetworkHandler.kt)
	at net.mamoe.mirai.internal.network.handler.CommonNetworkHandler$PacketDecodePipeline$1$3$1.invoke(CommonNetworkHandler.kt)
	at kotlinx.coroutines.intrinsics.UndispatchedKt.startCoroutineUndispatched(Undispatched.kt:55)
	at kotlinx.coroutines.CoroutineStart.invoke(CoroutineStart.kt:112)
	at kotlinx.coroutines.AbstractCoroutine.start(AbstractCoroutine.kt:126)
	at kotlinx.coroutines.BuildersKt__Builders_commonKt.launch(Builders.common.kt:56)
	at kotlinx.coroutines.BuildersKt.launch(Unknown Source)
	at kotlinx.coroutines.BuildersKt__Builders_commonKt.launch$default(Builders.common.kt:47)
	at kotlinx.coroutines.BuildersKt.launch$default(Unknown Source)
	at net.mamoe.mirai.internal.network.handler.CommonNetworkHandler$PacketDecodePipeline$1.invokeSuspend(CommonNetworkHandler.kt:126)
	at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
	at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.io.EOFException: Premature end of stream: expected 4 bytes
	at net.mamoe.mirai.internal.deps.io.ktor.utils.io.core.StringsKt.prematureEndOfStream(Strings.kt:453)
	at net.mamoe.mirai.internal.deps.io.ktor.utils.io.core.InputPrimitivesKt.readIntFallback(InputPrimitives.kt:86)
	at net.mamoe.mirai.internal.deps.io.ktor.utils.io.core.InputPrimitivesKt.readInt(InputPrimitives.kt:17)
	at net.mamoe.mirai.internal.network.notice.group.GroupOrMemberListNoticeProcessor.processImpl(GroupOrMemberListNoticeProcessor.kt:673)
	at net.mamoe.mirai.internal.network.components.MixedNoticeProcessor.processImpl(NoticeProcessorPipeline.kt:166)
	at net.mamoe.mirai.internal.network.components.SimpleNoticeProcessor.process(NoticeProcessorPipeline.kt:147)
	at net.mamoe.mirai.internal.network.components.SimpleNoticeProcessor.process(NoticeProcessorPipeline.kt:141)
	at net.mamoe.mirai.internal.pipeline.AbstractProcessorPipeline.process$suspendImpl(ProcessorPipeline.kt:287)
	... 27 more
```

The `Caused by` shows `GroupOrMemberListNoticeProcessor` hitting a premature end of stream while decoding the notice packet (`PbMsgInfo`), which would explain why the corresponding MemberLeaveEvent.Quit is never fired.
-
I tried switching from the `memberList` API to the `latestMemberList` API to compensate for the member-count drift caused by the missed events, only to find that the data `latestMemberList` returns is wrong (every member's last-speak time equals their join time, #2883). For now I have changed the periodic member-list refresh in the business logic to first call `latestMemberList` and discard its response (hoping it at least refreshes Mirai's cache), and, once that completes, to call the original `memberList` API to refresh the business-side member cache. Fingers crossed; a sketch of this two-step refresh follows below.
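For concreteness, a minimal sketch of that two-step refresh against mirai-api-http's HTTP adapter (the real setup uses the WebSocket adapter; `BASE`, `SESSION`, and the assumption that `latestMemberList` accepts the same `sessionKey`/`target` query parameters as `memberList` are mine):

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

val http: HttpClient = HttpClient.newHttpClient()
const val BASE = "http://127.0.0.1:8080" // mirai-api-http HTTP adapter, placeholder
const val SESSION = "your-session-key"   // placeholder

fun fetch(path: String, target: Long): String {
    val req = HttpRequest.newBuilder()
        .uri(URI.create("$BASE/$path?sessionKey=$SESSION&target=$target"))
        .GET()
        .build()
    return http.send(req, HttpResponse.BodyHandlers.ofString()).body()
}

fun refreshMemberCache(target: Long): String {
    fetch("latestMemberList", target) // response discarded: its last-speak times are wrong (#2883)
    return fetch("memberList", target) // then read the (hopefully refreshed) list as before
}
```
-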
After the member list had already been inflated by unseen quits, triggering the logic above (`latestMemberList` first, then `memberList`) still returned the inflated count rather than the actual one. Possibly `latestMemberList` does not update mirai's internal cache when it refreshes the member list.
So this needs another change after all. The current plan is to turn the half-async flow into a synchronous one and write an Adapter that takes the `latestMemberList` data, fills in each member's time from `memberList`, and hands the result back to the original logic.
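A hypothetical shape for that Adapter (type and function names are mine): keep the fresh roster from `latestMemberList`, but overwrite each member's last-speak time with the value from the cached `memberList` response, whose timestamps are still trustworthy:

```kotlin
data class MemberInfo(val id: Long, val lastSpeakTimestamp: Int)

fun adapt(latest: List<MemberInfo>, cached: List<MemberInfo>): List<MemberInfo> {
    val knownTimes = cached.associate { it.id to it.lastSpeakTimestamp }
    return latest.map { m ->
        // members who joined after the cache went stale keep whatever
        // latestMemberList reported (for them it equals the join time anyway)
        m.copy(lastSpeakTimestamp = knownTimes[m.id] ?: m.lastSpeakTimestamp)
    }
}
```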