Tick Rate vs Performance of RTOS

I will be running an ARM Cortex-M3 at 156 MHz. I need an accurate timer with a resolution of 250 µs, both for implementing delays and for keeping time, and perhaps for scheduling some action to be performed. I'm trying to decide whether to increase the OS tick timer to this resolution, or to use another hardware timer at my disposal. I have other hardware timers available, running at 25 MHz, that can generate periodic interrupts, and I could build a utility outside the OS to implement these functions; in fact, this code is all available from an earlier non-RTOS design. The issues I'm weighing as I think this over are:

1) How does increasing the tick rate from something like 200 Hz to 4 kHz affect the overall throughput of the processor? Does this vary with CPU frequency? I'm guessing it matters less on a processor running at 156 MHz than on one running at 25 MHz. Is this assumption valid?

2) The OS tick timer is usually placed at a low priority, which means it could be quite jittery depending on the loading from higher priority interrupts and from code that masks interrupts. To get accurate delays/scheduling/timekeeping, I may want to consider raising the priority of the scheduler. Are there problems with having the scheduler be the highest priority interrupt instead of the lowest? I suppose this depends on the real-time requirements of my other interrupts.

Has anybody had experience with this kind of question? Thanks for any feedback. Todd

Tick Rate vs Performance of RTOS

First, the effect of a high tick rate is a function of how much time a task switch takes versus how often you force one to happen via a tick. A faster processor executes more instructions per unit time than a slower one, so a tick will use up a lower percentage of the time.

Raising the priority of the OS tick isn't going to help the jitter if the jitter is caused by long interrupt execution times, because interrupts delay ALL task-level processing. Running the tick interrupt sooner in the interrupt nest isn't going to get you to the task waiting for the time mark any faster; the other interrupts still need to be executed first. Note that if interrupt routines are taking significant time, that is a sign the work may not be partitioned appropriately: with an RTOS, the interrupts should do the minimum needed to handle the interrupt, and any longer operations should be deferred to a task.

My first feeling is that if you really need a lot of 250 µs resolution timing, something isn't being specified right. I don't find much call for timeout values to be that accurate. Occasionally a task needs a short, precise delay like this, but even then I do not set the tick rate that fast (in part because tick-based delays tend to have a jitter of up to one tick, since you don't know where in the current tick period you are). If I need an occasional precise short delay, I use a hardware timer interrupt as a one-shot timer and have the task wait on a semaphore that the timer interrupt gives.
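For illustration, here is a minimal sketch of that one-shot-timer-plus-semaphore pattern. The vArmOneShotTimer() and vClearOneShotTimerInterrupt() helpers and the TIMER0_IRQHandler vector name are placeholders for whatever your spare 25 MHz timers and vector table actually provide.

```c
#include <stdint.h>
#include "FreeRTOS.h"
#include "semphr.h"

/* Hypothetical board-specific helpers: arm a spare hardware timer to fire
 * once after ulMicroseconds, and clear its interrupt flag. */
extern void vArmOneShotTimer( uint32_t ulMicroseconds );
extern void vClearOneShotTimerInterrupt( void );

static SemaphoreHandle_t xDelayDone;

void vPreciseDelaySetup( void )
{
    xDelayDone = xSemaphoreCreateBinary();
}

/* Called from a task that needs an occasional precise short delay. */
void vPreciseDelayUs( uint32_t ulMicroseconds )
{
    vArmOneShotTimer( ulMicroseconds );
    xSemaphoreTake( xDelayDone, portMAX_DELAY );  /* block until the ISR gives */
}

/* Timer interrupt: give the semaphore and request a context switch so the
 * waiting task runs as soon as the interrupt nest unwinds. */
void TIMER0_IRQHandler( void )
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;

    vClearOneShotTimerInterrupt();
    xSemaphoreGiveFromISR( xDelayDone, &xHigherPriorityTaskWoken );
    portYIELD_FROM_ISR( xHigherPriorityTaskWoken );
}
```

Note that because this ISR calls a FreeRTOS API function, its priority must be at or below configMAX_SYSCALL_INTERRUPT_PRIORITY.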

Tick Rate vs Performance of RTOS

Just to add to richard_damon’s reply:
1) How does increasing the tick timer from something like 200 Hz to 4 Khz effect the overall throughput of the processor? Does this vary with CPU frequency? I’m guessing that it’s less important on a processor running at 156 Mhz than one running at 25 Mhz. Is this assumption valid?
The frequency at which the tick happens is fixed, no matter how fast the CPU is. The time taken to execute the tick gets longer the lower the CPU frequency, so the lower the frequency of the CPU, the higher the percentage of CPU time spent processing ticks (and every other line of code will take longer to execute too). For example, if the tick interrupt took around 500 cycles, a 4 kHz tick would consume 2 million cycles per second: roughly 8% of a 25 MHz CPU, but only about 1.3% of a 156 MHz one.

A 1 kHz tick is already very fast. Most production systems use a much slower tick frequency, and I would very rarely recommend going above 1 kHz. If you want 4 kHz then use a timer peripheral instead. If you want little jitter in the interrupt, set its priority above configMAX_SYSCALL_INTERRUPT_PRIORITY (i.e. a lower numerical value on a Cortex-M3) and don't call any FreeRTOS API function from it. That will give you a jitter equal to the CPU's own interrupt-entry jitter, caused by the difference between entering an interrupt from a task and entering an interrupt from another interrupt (the tail-chaining and late-arrival features of the Cortex-M3 let you get into an interrupt faster at the expense of jitter). On a Cortex-M3 running at 50 MHz this is a jitter of about 70 ns (and measuring it shows it to be almost exactly equal to this theoretical value).

With regard to changing the priority of the kernel interrupt: my advice is simply, don't do that. It will break the interrupt nesting model of the Cortex-M3 and make your application extremely complex, because you will also end up with a set of lower priority interrupts that cannot call the FreeRTOS API. It won't gain you anything anyway, because you will end up with more context switch calls than you need (multiple redundant calls in a single interrupt nest). Regards.
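As a rough sketch of that low-jitter option, the fragment below assumes a CMSIS device header and a hypothetical TIMER1_IRQn / TIMER1_IRQHandler pair for whichever 25 MHz timer is dedicated to the 250 µs time base; adjust the names for your part.

```c
#include <stdint.h>
#include "device.h"     /* hypothetical CMSIS device header: NVIC_SetPriority(), NVIC_EnableIRQ() */
#include "FreeRTOS.h"

/* Typical Cortex-M3 settings in FreeRTOSConfig.h, shown for context:
 *   #define configTICK_RATE_HZ                    ( 200 )                       -- OS tick stays slow
 *   #define configPRIO_BITS                       3
 *   #define configMAX_SYSCALL_INTERRUPT_PRIORITY  ( 5 << ( 8 - configPRIO_BITS ) )
 */

static volatile uint32_t ulQuarterMilliseconds;   /* 250 us time base */

void vHighResTimerInit( void )
{
    /* Priority 1 is numerically lower (logically higher) than the 5 used for
     * configMAX_SYSCALL_INTERRUPT_PRIORITY above, so this interrupt is never
     * masked by the kernel -- and therefore must not call any FreeRTOS API. */
    NVIC_SetPriority( TIMER1_IRQn, 1 );
    NVIC_EnableIRQ( TIMER1_IRQn );

    /* ...program the timer peripheral itself for a 250 us period here... */
}

void TIMER1_IRQHandler( void )
{
    /* Clear the peripheral's interrupt flag here (device specific), then just
     * maintain the time base; no FreeRTOS calls are allowed at this priority. */
    ulQuarterMilliseconds++;
}
```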

Tick Rate vs Performance of RTOS

Actually, scrub that last bit about the multiple redundant calls: the Cortex-M3 uses different interrupts for the tick (SysTick) and for context switching (PendSV), so maybe it wouldn't be so bad on Cortex-M devices after all. My confusion; I use lots of different cores ;o) Regards.
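For reference, the standard Cortex-M3 port really does split the two: the tick runs in SysTick and context switches are pended on PendSV. A FreeRTOSConfig.h typically maps the port's handlers onto the CMSIS startup-file vector names like this (assuming a CMSIS-style startup file that names the vectors this way):

```c
/* Route the FreeRTOS Cortex-M port handlers onto the CMSIS vector names, so
 * the tick (SysTick) and context switch (PendSV) use separate interrupts. */
#define vPortSVCHandler     SVC_Handler
#define xPortPendSVHandler  PendSV_Handler
#define xPortSysTickHandler SysTick_Handler
```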

Tick Rate vs Performance of RTOS

Your input was very helpful. After reading your advice, thinking it over, and talking with our other developers, I decided to leave the tick at a lower rate like 200 Hz and use the hardware timers for the few cases where we need a short, precise delay or need to measure something. On reflection, we have no reason to schedule anything on such a fine time scale. And we'll leave the OS tick timer at the lower priority, as it is. Thanks.