author	Frederic Weisbecker <fweisbec@gmail.com>	2013-02-20 16:15:36 +0100
committer	Thomas Gleixner <tglx@linutronix.de>	2013-02-21 20:52:24 +0100
commit	e5ab012c3271990e8457055c25cafddc1ae8aa6b (patch)
tree	6c08a944d66fa68928a05b0792ad013c5e2a3da6
parent	1a13c0b181f218bf56a1a6b8edbaf2876b22314b (diff)
nohz: Make tick_nohz_irq_exit() irq safe
As it stands, irq_exit() may or may not be called with irqs disabled, depending on __ARCH_IRQ_EXIT_IRQS_DISABLED, which the arch can define. This makes tick_nohz_irq_exit() unsafe. For example, two interrupts can race in tick_nohz_stop_sched_tick(): the innermost one computes the expiry time from the timer list, then it is interrupted right before reprogramming the clock. The new interrupt enqueues a new timer list timer, reprograms the clock to take it into account and exits. The CPU then resumes the innermost interrupt and performs the clock reprogramming without considering the new timer list timer.

This regression was introduced by:

280f06774afedf849f0b34248ed6aff57d0f6908
("nohz: Separate out irq exit and idle loop dyntick logic")

Let's fix it right now with the appropriate protections.

A saner long term solution will be to remove __ARCH_IRQ_EXIT_IRQS_DISABLED and mandate that irq_exit() is called with interrupts disabled.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Linus Torvalds <torvalds@linuxfoundation.org>
Cc: <stable@vger.kernel.org> #v3.2+
Link: http://lkml.kernel.org/r/1361373336-11337-1-git-send-email-fweisbec@gmail.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
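[Editor's note: the following is a minimal, purely illustrative userspace sketch of the save/restore discipline the patch applies, not part of the patch itself. Blocking SIGALRM around a two-step update stands in for local_irq_save()/local_irq_restore() around __tick_nohz_idle_enter(); every name in it (timer_handler, reprogram_clock, next_event) is hypothetical.]

/*
 * Userspace analogue only: mask the asynchronous event source while
 * computing and committing the "next expiry", so a late-arriving
 * handler cannot race the two steps, then restore the previous mask.
 */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t next_event = 200;	/* hypothetical next expiry */

static void timer_handler(int sig)
{
	(void)sig;
	next_event = 100;	/* the "interrupt" queues a new, earlier event */
}

static void reprogram_clock(void)
{
	sigset_t block, saved;

	/* Analogue of local_irq_save(flags): mask the asynchronous source. */
	sigemptyset(&block);
	sigaddset(&block, SIGALRM);
	sigprocmask(SIG_BLOCK, &block, &saved);

	/* Critical section: read and commit the expiry without being raced. */
	printf("programming next event at %d\n", (int)next_event);

	/* Analogue of local_irq_restore(flags): restore the previous mask. */
	sigprocmask(SIG_SETMASK, &saved, NULL);
}

int main(void)
{
	struct sigaction sa;

	memset(&sa, 0, sizeof(sa));
	sa.sa_handler = timer_handler;
	sigemptyset(&sa.sa_mask);
	sigaction(SIGALRM, &sa, NULL);

	alarm(1);		/* the "interrupt" may fire while we reprogram */
	reprogram_clock();
	sleep(2);
	return 0;
}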
 kernel/time/tick-sched.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index 314b9ee07edf..520592ab6aa4 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -565,14 +565,19 @@ void tick_nohz_idle_enter(void)
  */
 void tick_nohz_irq_exit(void)
 {
+	unsigned long flags;
 	struct tick_sched *ts = &__get_cpu_var(tick_cpu_sched);
 
 	if (!ts->inidle)
 		return;
 
-	/* Cancel the timer because CPU already waken up from the C-states*/
+	local_irq_save(flags);
+
+	/* Cancel the timer because CPU already waken up from the C-states */
 	menu_hrtimer_cancel();
 	__tick_nohz_idle_enter(ts);
+
+	local_irq_restore(flags);
 }
 
 /**