Both mem_cgroup_charge_statistics() and mem_cgroup_move_account() were
unnecessarily disabling preemption when adjusting per-cpu counters:
	preempt_disable()
	__this_cpu_xxx()
	__this_cpu_yyy()
	preempt_enable()
With this change preemption is no longer disabled, so a CPU switch can
occur within these routines.  This is not a problem because the
counters of all cpus are summed when stats are reported.  Now both
mem_cgroup_charge_statistics() and mem_cgroup_move_account() look like:
	this_cpu_xxx()
	this_cpu_yyy()
akpm: this is an optimisation for x86 and a deoptimisation for non-x86.
The non-x86 situation will be fixed as architectures implement their
atomic this_cpu_foo() operations.
Signed-off-by: Greg Thelen <gthelen@google.com>
Reported-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Johannes Weiner <jweiner@redhat.com>
Cc: Valdis Kletnieks <Valdis.Kletnieks@vt.edu>
Cc: Balbir Singh <bsingharora@gmail.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>