On Wed, 2005-07-06 at 12:20 +1000, Nigel Cunningham wrote:
> +
> +/*
> + * Save and restore processor state for secondary processors.
> + * IRQs (and therefore preemption) are already disabled
> + * when we enter here (IPI).
> + */
> +
> +static volatile int loop __nosavedata;
> +
> +void __smp_suspend_lowlevel(void * data)
> +{
> + __asm__( "movl %%ecx,%%cr3\n" ::"c"(__pa(swsusp_pg_dir)));
> +
> + if (test_suspend_state(SUSPEND_NOW_RESUMING)) {
> + BUG_ON(!irqs_disabled());
> + kernel_fpu_begin();
> + c_loops_per_jiffy_ref[_smp_processor_id()] = current_cpu_data.loops_per_jiffy;
> + atomic_inc(&suspend_cpu_counter);
> +
> + /* Only the image is copied back while we spin in this
> + * loop. Our task info must not be looked at while that is
> + * happening (which smp_processor_id() would do). */
> + while (test_suspend_state(SUSPEND_FREEZE_SMP)) {
> + cpu_relax();
> + barrier();
> + }
> +
> + while (atomic_read(&suspend_cpu_counter) != _smp_processor_id()) {
> + cpu_relax();
> + barrier();
> + }
> + my_saved_context = (unsigned char *) (suspend2_saved_contexts + _smp_processor_id());
> + for (loop = sizeof(struct suspend2_saved_context); loop--; )
> + *(((unsigned char *) &suspend2_saved_context) + loop) = *(my_saved_context + loop);
> + suspend2_restore_processor_context();
> + cpu_clear(_smp_processor_id(), per_cpu(cpu_tlbstate, _smp_processor_id()).active_mm->cpu_vm_mask);
> + load_cr3(swapper_pg_dir);
> + wbinvd();
> + __flush_tlb_all();
> + current_cpu_data.loops_per_jiffy = c_loops_per_jiffy_ref[_smp_processor_id()];
> + mtrr_restore_one_cpu();
> + atomic_dec(&suspend_cpu_counter);
> + } else { /* suspending */
> + BUG_ON(!irqs_disabled());
> + /*
> + * Save context and go back to idling.
> + * Note that we cannot leave the processor
> + * here. It must be able to receive IPIs if
> + * (e.g.) the LZF compression driver does a
> + * vfree after compressing the kernel etc.
> + */
> + while (test_suspend_state(SUSPEND_FREEZE_SMP) &&
> + (atomic_read(&suspend_cpu_counter) != (_smp_processor_id() - 1))) {
> + cpu_relax();
> + barrier();
> + }
> + suspend2_save_processor_context();
> + my_saved_context = (unsigned char *) (suspend2_saved_contexts + _smp_processor_id());
> + for (loop = sizeof(struct suspend2_saved_context); loop--; )
> + *(my_saved_context + loop) = *(((unsigned char *) &suspend2_saved_context) + loop);
> + atomic_inc(&suspend_cpu_counter);
> + /* Now spin until the atomic copy of the kernel is made. */
> + while (test_suspend_state(SUSPEND_FREEZE_SMP)) {
> + cpu_relax();
> + barrier();
> + }
> + atomic_dec(&suspend_cpu_counter);
> + kernel_fpu_end();
> + }
> +}
We are using CPU hotplug for S3 & S4 SMP to avoid nasty deadlocks. Could
the same approach be used for suspend2 SMP?
Thanks,
Shaohua
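
For comparison, the hotplug-based scheme Shaohua refers to can be sketched roughly as follows (a sketch, not suspend2 code; it assumes the disable_nonboot_cpus()/enable_nonboot_cpus() helpers from the kernel's swsusp SMP support, with error handling elided):

	/* Take the secondary CPUs offline through the normal hotplug
	 * path before the atomic copy, so no IPI spin loops or
	 * __nosavedata state are needed on those CPUs at all. */
	int error;

	error = disable_nonboot_cpus();  /* unplug all but the boot CPU */
	if (!error) {
		/* ... single-threaded now: make or restore the image ... */
	}
	enable_nonboot_cpus();           /* bring the secondary CPUs back up */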