commit 8353bd0faf439dcb554d40c75bb93d83b3637c79
author Greg Thelen <gthelen@google.com>
	Wed, 24 Aug 2011 23:47:43 +0000 (25 09:47 +1000)
committer Stephen Rothwell <sfr@canb.auug.org.au>
	Wed, 31 Aug 2011 04:27:43 +0000 (31 14:27 +1000)
tree 081a52b52169104b08c018fa3abc3bceeb41d450
parent 7059f119d3dcbe95825d70f93056f335c4a3ed5a

Both mem_cgroup_charge_statistics() and mem_cgroup_move_account() were
unnecessarily disabling preemption when adjusting per-cpu counters:
    preempt_disable()
    __this_cpu_xxx()
    __this_cpu_yyy()
    preempt_enable()

With this change preemption is left enabled, so a CPU switch is possible
within these routines.  That is safe because stats are reported as the
sum of all per-cpu counters, so it does not matter which CPU a given
update lands on.  Now both
mem_cgroup_charge_statistics() and mem_cgroup_move_account() look like:
    this_cpu_xxx()
    this_cpu_yyy()

akpm: this is an optimisation for x86 and a deoptimisation for non-x86.
The non-x86 situation will be fixed as architectures implement their
atomic this_cpu_foo() operations.

Signed-off-by: Greg Thelen <gthelen@google.com>
Reported-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Johannes Weiner <jweiner@redhat.com>
Cc: Valdis Kletnieks <Valdis.Kletnieks@vt.edu>
Cc: Balbir Singh <bsingharora@gmail.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/memcontrol.c