Robustness fixes for netstack #46
Conversation
I was doing some debugging that had me looking at the disassembly of lpc_rx_queue() from within the debugger. I was looking for the call to pbuf_alloc() that we see in the following code snippet:

```c
p = pbuf_alloc(PBUF_RAW, (u16_t) EMAC_ETH_MAX_FLEN, PBUF_RAM);
if (p == NULL) {
    LWIP_DEBUGF(UDP_LPC_EMAC | LWIP_DBG_TRACE,
        ("lpc_rx_queue: could not allocate RX pbuf (free desc=%d)\n",
         lpc_enetif->rx_free_descs));
    return queued;
}
/* pbufs allocated from the RAM pool should be non-chained. */
LWIP_ASSERT("lpc_rx_queue: pbuf is not contiguous (chained)",
            pbuf_clen(p) <= 1);
```

While looking through the disassembly for this code I noticed a call to pbuf_clen() in the actual machine code:

```
=> 0x0000bab0 <+24>: bl      0x44c0 <pbuf_clen>
   0x0000bab4 <+28>: ldr     r3, [r4, #112]   ; 0x70
   0x0000bab6 <+30>: ldrh.w  r12, [r5, #10]
   0x0000baba <+34>: add.w   r2, r3, #9
   0x0000babe <+38>: add.w   r0, r12, #4294967295 ; 0xffffffff
```

The only call to pbuf_clen() made from this function is the one inside the LWIP_ASSERT. When I looked more closely at how this macro is defined, I saw that the mbed version of the stack had disabled the LWIP_PLATFORM_ASSERT macro when LWIP_DEBUG was false. This means that no action is taken if the assert fails, but the LWIP_ASSERT macro is still allowed to evaluate the assert expression. Defining the LWIP_NOASSERT macro fully disables the LWIP_ASSERT macro. One of my TCP/IP samples shrank by about 0.5K when I made this change.
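As a rough sketch of the behavior (simplified from lwIP's debug.h plumbing, not the literal mbed source), the difference is whether the assert expression survives compilation at all:

```c
/* Simplified sketch of lwIP's assert plumbing. */
#ifndef LWIP_PLATFORM_ASSERT
#define LWIP_PLATFORM_ASSERT(x)   /* mbed defined this away when LWIP_DEBUG was false */
#endif

#ifndef LWIP_NOASSERT
/* The assertion expression is still evaluated here, so calls such as
   pbuf_clen(p) remain in the generated machine code even though the
   failure handler does nothing. */
#define LWIP_ASSERT(message, assertion) \
  do { if (!(assertion)) { LWIP_PLATFORM_ASSERT(message); } } while (0)
#else
/* With LWIP_NOASSERT defined, the whole expression vanishes and the
   assert-only code (~0.5K in my sample) drops out of the binary. */
#define LWIP_ASSERT(message, assertion)
#endif
```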
After making my previous commit to completely disable LWIP_ASSERT macro invocations, I ended up with a warning in pbuf.c where an err variable was set but only checked for success inside an assert. I added a `(void)err;` statement to silence this warning.
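The pattern is just a cast-to-void to tell the compiler the variable is intentionally unused once the assert compiles out; schematically (do_work() is a hypothetical stand-in, not the actual pbuf.c call):

```c
err_t err = do_work(p);                        /* hypothetical helper */
LWIP_ASSERT("do_work failed", err == ERR_OK);  /* empty under LWIP_NOASSERT */
(void)err;  /* suppresses -Wunused-but-set-variable when asserts are disabled */
```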
This option actually enables the use of the lwip_sys_mutex for protecting concurrent access to such important lwIP resources as:

- select_cb_list (this is the one which originally flagged the problem)
- sockets array
- mem stats (if enabled)
- heap (if LWIP_ALLOW_MEM_FREE_FROM_OTHER_CONTEXT was non-zero)
- memp pool allocs/frees
- netif->loop_last pbuf linked list
- pbuf reference counts
- ...

I first noticed this issue when I hit a crash while slamming the net stack with a large number of TCP packets (I was actually sending 1k data buffers from the TCPEchoServer mbed sample). It crashed in the last line of this code snippet from event_callback:

```c
for (scb = select_cb_list; scb != NULL; scb = scb->next) {
    if (scb->sem_signalled == 0) {
```

It was crashing because scb had an invalid address, so it generated a bus fault. I figured that memory was either corrupted or there was some kind of concurrency issue. In trying to determine which, I wanted to walk through the select_cb_list linked list and see where it was corrupted:

```
(gdb) p scb
$1 = (struct lwip_select_cb *) 0x85100080
(gdb) p select_cb_list
$2 = (struct lwip_select_cb *) 0x0
```

That was interesting: the head of the linked list was now NULL, but it must have had a non-NULL value when this loop started running or we would never have reached the point of the crash. This was starting to look like a concurrency issue, since the linked list was being modified out from underneath this thread.

Looking through the source code for this function, I saw use of macros like SYS_ARCH_PROTECT and SYS_ARCH_UNPROTECT which looked like they should be providing the thread synchronization. I disassembled the event_callback() function in the debugger and saw no sign of the synchronization APIs that I expected. A search through the code for the definition of these SYS_ARCH_PROTECT/UNPROTECT macros led me to discover that they are actually ignored unless the port defines them itself (the mbed version doesn't) or the SYS_LIGHTWEIGHT_PROT macro is set to non-zero (the mbed version didn't do this either). Flipping SYS_LIGHTWEIGHT_PROT on in lwipopts.h fixed the crash I kept hitting; it increases the code size a bit and unfortunately slows things down a little, since access to these data structures is now actually serialized through calls to the RTOS sync APIs.
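The fix amounts to setting `#define SYS_LIGHTWEIGHT_PROT 1` in lwipopts.h. For context, the protection macros behave roughly like this (a simplified sketch of lwIP's sys.h, not the exact source):

```c
#if SYS_LIGHTWEIGHT_PROT
/* The port supplies these; on an RTOS they typically enter/leave a
   critical section or take a recursive mutex. */
#define SYS_ARCH_DECL_PROTECT(lev)  sys_prot_t lev
#define SYS_ARCH_PROTECT(lev)       lev = sys_arch_protect()
#define SYS_ARCH_UNPROTECT(lev)     sys_arch_unprotect(lev)
#else
/* With SYS_LIGHTWEIGHT_PROT left at 0 (and no port-specific definition),
   the macros expand to nothing -- exactly why the disassembly of
   event_callback() showed no synchronization calls. */
#define SYS_ARCH_DECL_PROTECT(lev)
#define SYS_ARCH_PROTECT(lev)
#define SYS_ARCH_UNPROTECT(lev)
#endif
```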
Previously the packet_rx() function would wait on the RxSem and, when signalled, would process all available inbound packets. This used to cause no problem, but once thread synchronization was turned on via SYS_LIGHTWEIGHT_PROT, the semaphore actually started to overflow its maximum token count of 65535, causing the mbed_die() flashing LEDs of death. The old code was really breaking the producer/consumer pattern that I typically see with a semaphore, since the consumer was written to consume more than one produced object per semaphore wait. Before thread synchronization was enabled, the packet_rx() thread could use a single time slice to process all of these packets and then loop back around a few more times to decrement the semaphore count while skipping the packet processing, since it had all been done. Now the packet processing code causes the thread to give up its time slice as it hits the newly enabled critical sections. In the end it was possible for the code to leak two semaphore signals for every one time the thread was awakened; after about 10 seconds of load, this adds up to a leak of 65535 signals. A sketch of the mismatch follows this note.

NOTE: Two potential issues with this change:

1. The LPC_EMAC->RxConsumeIndex != LPC_EMAC->RxProduceIndex check was removed from packet_rx(). I believe this is OK since the same condition is checked later in lpc_low_level_input() anyway, so it won't now try to process more packets than actually exist.
2. What if ENET_IRQHandler(void) ends up not signalling the RxSem for every packet received? When would that happen? I could see it happening if the ethernet hardware tried to pend more than one interrupt while the priority was too elevated to process the pending requests. Putting the consumer loop back in packet_rx() and using a Signal instead of a Semaphore might be a better solution?
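Schematically, the producer/consumer mismatch looked like this (a sketch with hypothetical helper names, not the driver source):

```c
/* Producer (ISR): posts one semaphore token per received packet. */
void ENET_IRQHandler(void) {
    sys_sem_signal(&rx_sem);               /* token count climbs per packet */
}

/* Consumer: one wait per wakeup, but the loop drains EVERY pending
   packet. Once critical sections force the thread to yield mid-loop,
   tokens pile up faster than they are consumed, until the RTOS cap of
   65535 is hit and mbed_die() fires. */
static void packet_rx(void *arg) {
    while (1) {
        sys_arch_sem_wait(&rx_sem, 0);     /* consumes only ONE token */
        while (rx_packet_pending())        /* hypothetical check */
            handle_one_rx_packet();        /* hypothetical handler */
    }
}
```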
I recently pulled in an NXP crash fix for their ethernet driver which requeues a pbuf to the ethernet driver, rather than sending it to the lwIP stack, if it can't allocate a new pbuf to keep the ethernet hardware primed with available packet buffers. While recently reviewing this code I noticed that it wasn't the full size of the pbuf that was used in this requeue operation, but only the size of the last received packet. I now reset the pbuf size back to its originally allocated size before doing the requeue operation.
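Conceptually the fix is a one-liner before the requeue; something along these lines (a sketch — the exact field handling in the driver may differ):

```c
/* Restore the pbuf to its full allocated size before handing it back
   to the EMAC DMA descriptors; p->len had been trimmed down to the
   size of the last received frame. */
p->len = p->tot_len = (u16_t)EMAC_ETH_MAX_FLEN;
lpc_rxqueue_pbuf(lpc_enetif, p);   /* the driver's requeue helper */
```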
This reverts commit acb3578. It turns out that this commit actually causes problems if an ethernet interrupt is dropped because a higher-privilege task is running, such as LocalFileSystem accesses. If this happens, the semaphore count isn't incremented enough times, and the packet_rx() thread will fall behind and end up running as though it had only one ethernet receive buffer. This causes even more lost packets. I plan to fix this by switching the semaphore to be a signal so that the synchronization object is more boolean: it simply indicates whether an interrupt has arrived since the last time packet_rx() was awakened to process inbound packets.
I now use a signal to communicate when a packet has been received by the ethernet hardware and should be processed by the packet_rx thread. Previously, the change to make the lwIP stack thread-safe introduced enough delay in packet_rx that the semaphore count could lag behind the processed packets and overflow its maximum token count. Now the ISR uses the signal to indicate that one or more packets have been received since the last time packet_rx() was awakened. Previously the ethernet driver used generic sys_arch* APIs exposed from lwIP to manipulate the semaphores; I now call the CMSIS RTOS APIs directly when using the signals. I think this is acceptable since the same driver source file already contains similar os* calls that talk directly to the RTOS.
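With the CMSIS-RTOS v1 signal API, the shape of the new code is roughly this (the signal bit and thread-id variable names are illustrative, not the driver's):

```c
#define RX_SIGNAL  (1u << 0)            /* illustrative signal bit */

static osThreadId packet_rx_thread_id;  /* recorded when the thread starts */

/* ISR: setting an already-set signal is a no-op, so unlike a semaphore
   this can never overflow -- it just records ">= 1 packet arrived". */
void ENET_IRQHandler(void) {
    osSignalSet(packet_rx_thread_id, RX_SIGNAL);
}

/* RX thread: wake on the signal, then drain every available packet. */
static void packet_rx(void *arg) {
    while (1) {
        osSignalWait(RX_SIGNAL, osWaitForever);
        /* ...process all pending packets here... */
    }
}
```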
The leading whitespace preceding the fields in the lpc_enetdata structure definition was originally a tab, and I had used 4 spaces when I added RxThread.
I don't understand everything that happened here, but the changes look OK to me overall. Very nice investigation, as usual! Before I forget, what's the problem with GCC_ARM and _sbrk?
Thanks for reviewing and pulling these changes.
I have never properly investigated this issue. I just drop the _sbrk() implementation from gcc4mbed into main.cpp for a test like NET1 and it runs, whereas it would hang at startup otherwise. I suspect it is related to this issue from the forums, but I haven't verified this under the debugger. I will actually debug this issue later today and get back to you with a new issue or pull request better describing the problem.
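For reference, the gcc4mbed-style _sbrk() workaround is essentially a minimal newlib heap shim along these lines (a sketch, assuming a CMSIS __get_MSP() intrinsic and a linker-provided `__end__` symbol; the real gcc4mbed version differs in its details):

```c
#include <errno.h>
#include <sys/types.h>
#include "cmsis.h"   /* for __get_MSP(); header name is an assumption */

extern unsigned int __end__;  /* end of static data, from the linker script */

caddr_t _sbrk(int incr) {
    static unsigned char *heap = (unsigned char *)&__end__;
    unsigned char *prev = heap;

    /* Refuse allocations that would run the heap into the current stack. */
    if (heap + incr > (unsigned char *)__get_MSP()) {
        errno = ENOMEM;
        return (caddr_t)-1;
    }
    heap += incr;
    return (caddr_t)prev;
}
```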
I have created pull request #48 which contains a fix for the _sbrk() issue.
It turns out that my previous concern about the semaphore token count overflowing was well founded. Pablo was running an HTTP server with those changes and it performed horribly. It turned out that his server was hosting content from the LocalFileSystem. The semihost requests to the mbed interface chip for these LocalFileSystem calls halt the CPU and cause it to miss some ethernet interrupts. With my previous fix attempt, this led to the semaphore not being signalled enough times, and in the end the ethernet driver started to perform as though it had only one packet receive buffer. The final result was increased packet loss. I have removed the semaphore from the receive path in the ethernet driver and switched to a signal. I also pulled the loop back into packet_rx(), which now loops through all available packets when it is awakened by the signal.
The other changes include:

- fully disabling the LWIP_ASSERT macro via LWIP_NOASSERT and silencing the resulting unused-variable warning in pbuf.c
- enabling SYS_LIGHTWEIGHT_PROT in lwipopts.h to protect shared lwIP data structures from concurrent access
- resetting a pbuf to its originally allocated size before requeueing it to the ethernet hardware
- a whitespace cleanup in the lpc_enetdata structure definition
Please refer to the commit descriptions for more details on each of those changes.
Pablo Gindel has buddy-tested these changes with the TCP/UDP code that he is developing. I have also tested these networking changes with a few samples that I build with gcc4mbed, and with the NET_1 test built and run via the Python scripts in the mbed tree itself. I would run more of the tests from the mbed build system, but they don't seem to work with GCC_ARM due to what I believe is a _sbrk() issue.
Thanks for considering these changes!