FreeRTOS+FAT FIFO

Hi, we’re using FreeRTOS+FAT for storing sensor data. One FreeRTOS task is collecting data and writing it to a file, while another task is reading from the file and uploading the data to a server. The data is chunked, and we’re currently writing one line in the file for each chunk. We keep a separate read pointer into the file so we can switch between read and write mode for the two processes (they often run in parallel for “live streaming”). We’re basically using the file as a large FIFO, protected with a binary semaphore.

The above solution works well, but eventually the file will become too large, as no data is ever removed. Also, if the MCU restarts for some reason (power outage, WDT reset, etc.), it will upload all data ever collected. We could of course empty the file each time the read pointer reaches EOF, but that would not solve the case where the MCU restarts when only half the data has been uploaded. Everything would then be re-uploaded.

So, we need some way of removing data that has already been uploaded. The ideal solution would be to “truncate” with an offset: if the file is N bytes large and we upload the first X bytes, it would be perfect to truncate it to (N-X) bytes, starting X bytes in. I don’t see any way to do this with the current API. Another solution would be to somehow prepend data to the file, and then use the regular ff_truncate function to remove uploaded data from the end.

We initially tried using one file per data chunk, but the file system became extremely slow after a few thousand files.

Does anyone have a good solution to this problem? Maybe there’s a clever/obvious solution I haven’t thought of.

Best regards, Fredrik

FreeRTOS+FAT FIFO

Hi Fredrik, what medium are you using for FreeRTOS+FAT? Is it an SD-card?

For sensor data that is collected at a constant rate, I would not really think of using FAT; I’d rather use some SPI/NAND memory chip. But if you insist on using FAT, I would propose creating a big fixed-size binary file with a 512-byte header that stores a read and a write pointer (among other things). When a data record is written, the write pointer advances. When a pointer reaches the maximum size of the file, it wraps around and moves to sector 1 at offset 512 (sector 0 is reserved for storing the meta data). When the file is not present at start-up, create it and write zeros until it is completely filled. After that, you will find that access is very fast, simply because all sectors have been pre-allocated and no FAT changes are needed.
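In rough, untested C that scheme could look like the sketch below, using the ff_stdio wrapper; the file name, record size and total size are just example values I made up:

    /* Sketch of the fixed-size circular-buffer file described above.
     * Data records start at offset 512; sector 0 holds the meta data. */
    #include <stdint.h>
    #include "ff_stdio.h"

    #define logFILE_NAME      "/log.bin"
    #define logHEADER_SIZE    512UL                    /* Sector 0: meta data.    */
    #define logRECORD_SIZE    512UL                    /* One sector per record.  */
    #define logRECORD_COUNT   16384UL                  /* 8 MiB of data area.     */

    typedef struct
    {
        uint32_t ulReadOffset;    /* Next offset to read from, >= logHEADER_SIZE. */
        uint32_t ulWriteOffset;   /* Next offset to write to,  >= logHEADER_SIZE. */
        uint8_t ucPadding[ logHEADER_SIZE - 2U * sizeof( uint32_t ) ];
    } LogHeader_t;

    /* Create the file once, pre-allocating every sector so that later writes
     * never change the FAT or the directory entry. */
    static int prvCreateLogFile( void )
    {
        static const uint8_t ucZeros[ logRECORD_SIZE ] = { 0 };
        LogHeader_t xHeader = { 0 };
        FF_FILE *pxFile;
        uint32_t ulRecord;

        pxFile = ff_fopen( logFILE_NAME, "w" );

        if( pxFile == NULL )
        {
            return -1;
        }

        /* Sector 0: read and write pointers both start at the first data sector. */
        xHeader.ulReadOffset = logHEADER_SIZE;
        xHeader.ulWriteOffset = logHEADER_SIZE;
        ff_fwrite( &xHeader, sizeof( xHeader ), 1, pxFile );

        /* Fill the data area with zeros so that every cluster gets allocated now. */
        for( ulRecord = 0; ulRecord < logRECORD_COUNT; ulRecord++ )
        {
            ff_fwrite( ucZeros, sizeof( ucZeros ), 1, pxFile );
        }

        /* Closing the file commits the final length to the directory entry. */
        return ff_fclose( pxFile );
    }

After this, the read/write pointers in sector 0 are simply re-written at their fixed offsets whenever a record is produced or consumed, so neither the FAT nor the directory entry ever changes again.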
we’re currently writing one line in the file for each chunk
It would be handy if you can give the logging records a fixed size. If not, you’d have to look for tokens like LF/CR, which makes it slower.
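For example (purely illustrative, the field names are made up), a record could be padded to exactly one 512-byte sector:

    #include <stdint.h>

    /* One fixed-size logging record, padded to exactly one sector so that no
     * scanning for LF/CR is needed and every write stays sector-aligned. */
    typedef struct
    {
        uint32_t ulSequenceNumber;   /* Monotonic counter, handy after a restart.  */
        uint32_t ulTimestamp;        /* Acquisition time of the chunk.             */
        uint16_t usPayloadLength;    /* Number of valid bytes in ucPayload[].      */
        uint8_t  ucPayload[ 498 ];   /* Sensor data, zero-padded to the full size. */
        uint32_t ulCrc32;            /* Integrity check over the fields above.     */
    } SensorRecord_t;                /* 4 + 4 + 2 + 498 + 4 = 512 bytes.           */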
We’re basically using the file as a large FIFO with a binary semaphore
In the solution that I’m proposing, the file will not be a FIFO but a circular buffer 🙂

Note that FreeRTOS+FAT already has a semaphore to protect against concurrent access. But as long as you have a file opened with WRITE access, READ access to it will not be allowed. You can, however, both read and write when using the same file handle (while protecting the handle with a semaphore 🙂).

The data written to a file will be flushed immediately when calling the +FAT flush routine FF_FlushCache(). This is in contrast to a file with a growing length: its directory entry is not updated when calling FF_FlushCache(). Flushing data in +FAT means emptying and freeing the sector cache buffers; it does not mean that the directory entry is updated and flushed to disk. That only happens when the output file is closed.
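The shared-handle approach could look something like this (an untested sketch; the file would be opened once in update mode and the mutex created at start-up, and all names here are invented):

    #include "FreeRTOS.h"
    #include "semphr.h"
    #include "ff_stdio.h"

    static FF_FILE *pxLogFile = NULL;          /* Opened once, e.g. ff_fopen( "/log.bin", "r+" ). */
    static SemaphoreHandle_t xLogMutex = NULL; /* Created with xSemaphoreCreateMutex().           */

    /* Write one record at the given offset; called by the logging task. */
    static size_t xWriteRecord( const void *pvRecord, size_t xSize, long lOffset )
    {
        size_t xReturn = 0;

        if( xSemaphoreTake( xLogMutex, portMAX_DELAY ) == pdTRUE )
        {
            if( ff_fseek( pxLogFile, lOffset, FF_SEEK_SET ) == 0 )
            {
                xReturn = ff_fwrite( pvRecord, xSize, 1, pxLogFile );
            }

            xSemaphoreGive( xLogMutex );
        }

        return xReturn;
    }

    /* Read one record at the given offset; called by the uploading task. */
    static size_t xReadRecord( void *pvRecord, size_t xSize, long lOffset )
    {
        size_t xReturn = 0;

        if( xSemaphoreTake( xLogMutex, portMAX_DELAY ) == pdTRUE )
        {
            if( ff_fseek( pxLogFile, lOffset, FF_SEEK_SET ) == 0 )
            {
                xReturn = ff_fread( pvRecord, xSize, 1, pxLogFile );
            }

            xSemaphoreGive( xLogMutex );
        }

        return xReturn;
    }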

FreeRTOS+FAT FIFO

“Files” do not support things like inserting/deleting at the beginning (or middle), because that would effectively require reading in the whole file, modifying it, and writing it back out, due to the block nature of the file system. Hein has pointed out one way of changing your approach to use the file as a circular buffer. Another option would be to automatically roll from one file to another when you reach a certain size, and then delete the old files once they are done being used (the read and write files might then be different files at times).
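As a rough illustration of that roll-over idea (untested; the naming scheme and size limit are invented):

    #include <stdint.h>
    #include <stdio.h>
    #include "ff_stdio.h"

    #define rollMAX_FILE_SIZE    ( 1024L * 1024L )   /* Roll to the next file after ~1 MiB. */

    static uint32_t ulWriteIndex = 0;                /* Number of the file currently being written. */

    /* Append one chunk to the current file, rolling over when it gets too big. */
    static void vAppendChunk( const void *pvChunk, size_t xSize )
    {
        char pcName[ 24 ];
        FF_FILE *pxFile;

        snprintf( pcName, sizeof( pcName ), "/log_%05lu.bin", ( unsigned long ) ulWriteIndex );
        pxFile = ff_fopen( pcName, "a" );    /* Append, create if it does not exist. */

        if( pxFile != NULL )
        {
            ff_fwrite( pvChunk, xSize, 1, pxFile );

            /* In append mode the position after a write should equal the file
             * length, so it can be used to decide when to roll over. */
            if( ff_ftell( pxFile ) >= rollMAX_FILE_SIZE )
            {
                ulWriteIndex++;
            }

            ff_fclose( pxFile );
        }
    }

    /* The uploader removes a file once all of its records have been acknowledged,
     * so a restart can at most re-upload the one file it was still working on. */
    static void vDeleteUploadedFile( uint32_t ulIndex )
    {
        char pcName[ 24 ];

        snprintf( pcName, sizeof( pcName ), "/log_%05lu.bin", ( unsigned long ) ulIndex );
        ff_remove( pcName );
    }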

FreeRTOS+FAT FIFO

Richard’s idea is also good, and it allows you to open one file in read mode and another file in write mode independently. Only one remark: when there is a power failure or an exception, the length of the file being written won’t get updated in the directory entry. That is only done when the write handle is closed with FF_Close(). When it is a new file, its length will appear as zero until it is closed. When closing a file, all sector buffers will be flushed to disk. A file can be opened in update mode (allowing read & write) with, for instance, FF_Open(pcName, "+"). With "a+", the file will be created if it does not exist yet. Please see FF_GetModeBits() in ff_file.c.

FreeRTOS+FAT FIFO

Hein and Richard, thank you for your suggestions.

Hein, yes, we’re currently using an SD-card, but will switch to an eMMC as soon as our new board arrives in about two weeks. We’ve used SD-cards for a few years, but wanted something soldered to the board for this generation of boards. eMMC seemed like a nice option; we’ll see how it works out.

Hein, the solution you’re proposing is exactly the one we’ve been using for the last few years: raw access to an SD-card without a file system, storing read/write pointers that wrap to create a circular buffer. It has worked very well, but we figured there might be a simpler setup now that we have a file system and all 🙂 Also, we try to use open-source/well-tested code as much as possible instead of reinventing the wheel over and over as we have done in the past 🙂

We don’t really need to use FAT to store the data, it just seems convenient for debugging purposes. In the past, when something went wrong with the storage, we had to upload the whole storage and go through the binary format. We figured a file would be more convenient, as we’ve added a small web server to the code so that we can view the file system and the files on it. But maybe going back to that approach, with a circular buffer in a file, is a good solution.

So, regarding the fixed-size file: is the only added benefit write speeds? Or would a crash with a file that grows mean that data is lost even though we flush?

Best regards, Fredrik

FreeRTOS+FAT FIFO

Hi Fredrik,
We’ve used SD-cards for a few years, but wanted something soldered to the board
A wise decision. The contacts of an SD-card are vulnerable to corrosion, spiders, and oily substances.
We’ve used raw access to a SD-card without a FS storing read/write pointers that wraps to create a circular buffer
Well, it is useful to have a file system. For instance, it is nice to store firmware updates (HEX files) and create (human-readable) configuration files or web pages, or statistics and debugging output 🙂 The access speed should be about the same if you compare raw access with a fixed-size binary file.
So, regarding the fixed-size file: is the only added benefit write speeds?
When the file has a fixed size, the directory entry will never have to change again. It will also be considerably faster than a growing file, because ff_fwrite() doesn’t have to allocate new sectors. Especially if your information blocks are sector-aligned (a multiple of 512 bytes), it will be the fastest solution.
Or would a crash with a file that grows mean that data is lost even though we flush?
When you call FF_FlushCache(), all sector buffers will be written to disk, so your data has been stored. But that will not update directory entries. When you want to use a growing file, just call FF_Close(), and re-open the file in append mode the next time you want to write. FF_Close() will also call FF_FlushCache() when it is ready. Here is another thread about this subject.
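In other words, something like the following (a minimal, untested sketch using the ff_stdio equivalents; the file name is just an example):

    #include "ff_stdio.h"

    /* Append one complete chunk and commit it by closing the handle, so the
     * directory entry (and hence the file length) survives a sudden reset. */
    static int xAppendChunkAndCommit( const void *pvChunk, size_t xSize )
    {
        FF_FILE *pxFile = ff_fopen( "/sensor.log", "a" );   /* Re-open in append mode. */
        int iResult = -1;

        if( pxFile != NULL )
        {
            if( ff_fwrite( pvChunk, xSize, 1, pxFile ) == 1 )
            {
                iResult = 0;
            }

            /* ff_fclose() flushes the cached sectors and updates the directory
             * entry; FF_FlushCache() alone would only flush the sectors. */
            ff_fclose( pxFile );
        }

        return iResult;
    }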

FreeRTOS+FAT FIFO

Thanks again Hein. We’ll go with a growing file and close/open after each write of a complete chunk (this will happen at most once per second during intense periods). Once the file has grown beyond a few GB, we’ll try to delete it and start over, after asserting that all data has been uploaded, etc.

The next challenge will be how to store the information on which data has been uploaded and acknowledged (with CRCs to the server). That read pointer is currently only held in memory, which makes the system vulnerable to resets (WDT, power outage, etc.). The obvious solution would be to write it to the file (at the beginning, as you suggest) or to a separate file. The problem I see is that it will be written quite often (sometimes a few thousand times per hour), which will wear out the flash sectors where it is stored.

Best regards, Fredrik

FreeRTOS+FAT FIFO

One advantage of rolling the file over is that when the data in one file has been uploaded, that file can be deleted so you don’t try to upload it again. That means you will re-upload at most one file’s worth of data after a restart. That might argue for making the roll-over file size smaller.

FreeRTOS+FAT FIFO

Richard, true, thanks.