Heap, stack, printf and malloc failure

Sorry for the somewhat vague subject, but I’m not really sure where our problems originate. Probably a lack of understanding of the way FreeRTOS allocates memory. We’re rewriting a project, originally based on an ATxmega256 without any OS where we basically wrote everything ourselves, to be based on FreeRTOS. We’re currently having some trouble with memory allocation: every now and then vApplicationMallocFailedHook is called, and we’re not really sure why; we should have plenty of free memory. Maybe someone with a better understanding of FreeRTOS can shed some light on this.

First of all, we’re running the code on an ATSAM4E16E. The project uses FreeRTOS 10 with FreeRTOS+TCP and FreeRTOS+FAT. The current code samples the internal thermometer of the ATSAM every 100 ms and writes the sample to a text file on an SD card. A parallel task reads from the same file (guarded by a binary semaphore) and uploads the data to a server via Ethernet. A third task runs a small web server. We based the web server code on the sample code available with FreeRTOS+TCP but rewrote it to serve pages stored in predefined strings instead of reading files from flash. The idea is that the production code will also feature the web server, and we don’t want to write HTML files to an external flash memory in production. This of course leads to a lot of strings defined in memory, but the pages served are very small compared to the amount of SRAM available.

So, first of all, the ATSAM has 128 KB of embedded SRAM. Since almost all code runs inside FreeRTOS tasks, we’ve tried to allocate as much memory as possible to the FreeRTOS heap. However, the linker complains that we’ve run out of RAM during the build if we set configTOTAL_HEAP_SIZE to more than 96×1024 (96 kB). It doesn’t really add up that the code outside the FreeRTOS tasks would require that much memory. Have we missed something? Shouldn’t we be able to set it close to 128×1024?
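For reference, this is roughly the relevant part of our FreeRTOSConfig.h (values as described above; my understanding is that with heap_4 the heap is a statically allocated array, so I assume it has to share the SRAM with .data, .bss and the main stack, which may be why the linker rejects anything larger):

```c
/* FreeRTOSConfig.h (excerpt) - roughly what we have today.
 * With heap_4.c the heap is a statically allocated array
 * (ucHeap[configTOTAL_HEAP_SIZE]) placed in .bss, so it competes
 * for the 128 KB of SRAM with globals, .data and the main stack. */
#define configTOTAL_HEAP_SIZE           ( ( size_t ) ( 96 * 1024 ) )
#define configUSE_MALLOC_FAILED_HOOK    1
#define configCHECK_FOR_STACK_OVERFLOW  2
```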
Also, we’ve assigned a total of 15 kB of stack (4 × the sum of all stack depths given to the tasks in xTaskCreate, since the depth parameter is in words and a word is 4 bytes on this part). But when we poll xPortGetFreeHeapSize() it’s typically at 40 kB. Shouldn’t it be around 80 kB? xPortGetMinimumEverFreeHeapSize() is often down to about 20 kB. Where is all the heap going? Shouldn’t the sum of all available stack be the maximum amount of memory consumed by the tasks? We’re using heap_4. For the individual tasks, uxTaskGetStackHighWaterMark() reports reasonable values with quite low memory usage.

So, on to the real problem. Apart from the desire to understand the memory allocation better, we’re having some crashes. Every now and then vApplicationMallocFailedHook is called. It seems to occur more often when the system is stressed (reloading the web pages rapidly while doing a lot of other things), but sometimes it will just occur after a while of normal operation. At first, the malloc failures were quite frequent, but we recently realized the idle task didn’t get to run often enough and therefore didn’t have time to free memory from other tasks. We’ve since decreased the priority of almost all tasks to the lowest priority so that the idle task can run more. This reduced the problem but didn’t make it go away. If a task is running out of memory, shouldn’t vApplicationStackOverflowHook be called for that task? The stack overflow detection is working and was triggered every now and then during development when we had allocated too little stack to a task.

Could the problems be related to printf and snprintf? We’ve implemented our own logger so we can call Logger_info(format, …) and Logger_debug(format, …) in the code. The calls are defined as a function that adds __FUNCTION__, __FILE__, __LINE__ and the RTC time to the debug string via vprintf. It’s used quite extensively in the code. The other suspect is snprintf, which we use a lot for building the HTML pages. The HTML pages are defined as strings, e.g. “
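To make the logger part concrete, here is a minimal sketch of how it is wired. This is simplified for illustration: the real version also prepends the RTC time and ultimately goes through vprintf, while this sketch formats into a fixed buffer with vsnprintf; the buffer size and output format are just assumptions for the example.

```c
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/* Sketch of our logger: a macro injects __FUNCTION__, __FILE__ and
 * __LINE__ at the call site, and the helper formats the message.
 * (The RTC timestamp our real code prepends is omitted here.) */
#define Logger_info(...) \
    logger_write("INFO", __FUNCTION__, __FILE__, __LINE__, __VA_ARGS__)

static char log_buf[256];

static void logger_write(const char *level, const char *func,
                         const char *file, int line,
                         const char *fmt, ...)
{
    /* Prefix: "[LEVEL] function (file:line): " */
    int n = snprintf(log_buf, sizeof log_buf, "[%s] %s (%s:%d): ",
                     level, func, file, line);
    if (n >= 0 && (size_t)n < sizeof log_buf) {
        va_list ap;
        va_start(ap, fmt);
        vsnprintf(log_buf + n, sizeof log_buf - (size_t)n, fmt, ap);
        va_end(ap);
    }
    fputs(log_buf, stdout);
    fputc('\n', stdout);
}
```

A call like `Logger_info("temperature=%d", 25);` then emits one line containing the level, the call site and the formatted message.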