ff_fflush() – does it exist?
I’m working on an application where we write debugging information to a log file, to aid in tracking down bugs and system crashes during testing. I’m having a problem where log entries sometimes don’t make it into the file. This seems to be caused by the filesystem caching the file contents rather than writing directly to disk. According to the documentation, the caches are flushed when the file is closed with ff_fclose(), but I can’t guarantee that files will be closed cleanly in the event of a system crash, meaning that the most useful log entries (those written immediately before the crash) will invariably be lost.
The usual solution to this problem is to call fflush() (or, in FreeRTOS’s case, ff_fflush()) immediately after writing, to force the buffers to be flushed. However, as of the latest (160919) release, this function seems to be missing. It’s declared in the ff_stdio.h header, but there is no implementation, nor is it documented on the website. Are there any plans to add this function at some point? Or is there another way to do it?
A workaround would be to open the file once for each log entry, then close it immediately after writing, but that seems very inefficient, especially given the amount of stuff going on in ff_fopen(). There are likely to be a lot of short messages written to the log, so these inefficiencies would quickly stack up.
ff_fflush() – does it exist?
In any system, the safest way would be to open/write/close the file for every logging message.
~~~
FILE *fd = fopen( fname, "a" );

if( fd != NULL )
{
    /* Use an explicit format string so the message itself is not treated as one. */
    fprintf( fd, "%s", msg );
    fclose( fd );
}
~~~
That is not a work-around, that is the way to go, I’m afraid.
There is a way to force flushing, while keeping a file handle open, which is:
~~~
FF_Error_t FF_FlushCache( FF_IOManager_t *pxIOManager );
~~~
But that will not update the directory entry of the file. It will only flush the caches. So in case your application crashes, you will still not see the latest logging text. The data sectors are stored on disk, but the directory doesn’t show the actual file length.
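For illustration, here is a minimal sketch of how a log write could be paired with that flush call. The wrapper name and the assumption that the application keeps a pointer to the disk’s FF_IOManager_t are mine, not from the post; FF_FlushCache() and ff_fprintf() come from ff_ioman.h and ff_stdio.h.
~~~
#include "ff_headers.h"
#include "ff_stdio.h"

/* Hypothetical helper: write one log line, then flush the sector caches.
 * Note that this does NOT update the file's directory entry, so the file
 * length recorded on disk may still lag behind after a crash. */
void vLogWithFlush( FF_FILE *pxLogFile, FF_IOManager_t *pxIOManager, const char *pcMsg )
{
    ff_fprintf( pxLogFile, "%s\n", pcMsg );
    FF_FlushCache( pxIOManager );
}
~~~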
When using the optimisation ffconfigOPTIMISE_UNALIGNED_ACCESS, the file cache will not be flushed to disk. That will only happen when moving to the next 512-byte sector, or when closing the file.
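If crash-time logging matters more than write speed, that optimisation can also be turned off in the project’s FreeRTOSFATConfig.h (a sketch of just this one setting; review the other options your project sets as well):
~~~
/* In FreeRTOSFATConfig.h: disable the unaligned-access optimisation so that
 * small writes are not held back in the file's private sector buffer. */
#define ffconfigOPTIMISE_UNALIGNED_ACCESS    0
~~~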
I know that open/write/close is not very efficient, but I’m afraid that it is the only safe way of logging in a less stable system.
Maybe you can call ff_fclose() less frequently? For instance, at the end of a main loop:
~~~
for( ;; )
{
    // Do your work here that may produce logging

    if( fdlogging != NULL )
    {
        ff_fclose( fdlogging );
        fdlogging = NULL;
    }
}
~~~
ff_fflush() – does it exist?
Thanks for the info. The logging is actually running in its own task, so I might look into opening/closing the file periodically on a timer, rather than after every message. Then it would just be a case of setting the time period as a trade-off between efficiency and log reliability.
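In case it helps anyone reading later, a rough sketch of that idea (the task, queue, file name, and flush period here are illustrative, not part of the thread): messages are queued by the rest of the system, and the logging task reopens the file, drains the queue, and closes it again once per period.
~~~
#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"
#include "ff_stdio.h"

#define logMAX_MESSAGE_LENGTH    128

/* Created elsewhere with item size logMAX_MESSAGE_LENGTH. */
static QueueHandle_t xLogQueue;

static void prvLoggingTask( void *pvParameters )
{
    const TickType_t xFlushPeriod = pdMS_TO_TICKS( 1000 );
    char cMessage[ logMAX_MESSAGE_LENGTH ];
    FF_FILE *pxLogFile;

    ( void ) pvParameters;

    for( ;; )
    {
        /* Open in append mode, write everything queued during this period,
         * then close so both the data and the directory entry reach the disk. */
        pxLogFile = ff_fopen( "/log.txt", "a" );

        if( pxLogFile != NULL )
        {
            while( xQueueReceive( xLogQueue, cMessage, 0 ) == pdPASS )
            {
                ff_fprintf( pxLogFile, "%s\n", cMessage );
            }

            ff_fclose( pxLogFile );
        }

        vTaskDelay( xFlushPeriod );
    }
}
~~~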