|
remove the 'u64 now' parameter from dequeue_entity().
( identity transformation that causes no change in functionality. )
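the same mechanical pattern repeats for every 'u64 now' removal in this
series; a rough before/after sketch (prototypes abbreviated, not the
exact kernel declarations):

  /* before: every caller threads a timestamp through the call */
  static void dequeue_entity(struct cfs_rq *cfs_rq,
                             struct sched_entity *se, int sleep, u64 now);

  /* after: the helper reads rq->clock itself, which an earlier
   * update_rq_clock() call on the same path has refreshed */
  static void dequeue_entity(struct cfs_rq *cfs_rq,
                             struct sched_entity *se, int sleep);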
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
remove the 'u64 now' parameter from enqueue_entity().
( identity transformation that causes no change in functionality. )
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
remove the 'u64 now' parameter from enqueue_sleeper().
( identity transformation that causes no change in functionality. )
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
remove the 'u64 now' parameter from __enqueue_sleeper().
( identity transformation that causes no change in functionality. )
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
remove the 'u64 now' parameter from update_stats_curr_end().
( identity transformation that causes no change in functionality. )
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
remove the 'u64 now' parameter from update_stats_dequeue().
( identity transformation that causes no change in functionality. )
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
remove the 'u64 now' parameter from update_stats_curr_start().
( identity transformation that causes no change in functionality. )
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
remove the 'u64 now' parameter from update_stats_wait_end().
( identity transformation that causes no change in functionality. )
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
remove the 'u64 now' parameter from __update_stats_wait_end().
( identity transformation that causes no change in functionality. )
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
remove the 'u64 now' parameter from update_stats_enqueue().
( identity transformation that causes no change in functionality. )
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
remove the 'u64 now' parameter from update_stats_wait_start().
( identity transformation that causes no change in functionality. )
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
remove the 'u64 now' parameter from update_curr().
( identity transformation that causes no change in functionality. )
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
remove the 'u64 now' parameter from print_cfs_rq().
( identity transformation that causes no change in functionality. )
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
change all 'now' timestamp uses in assignments to rq->clock.
( this is an identity transformation that causes no functionality change:
every such new rq->clock use is necessarily preceded by an update_rq_clock()
call. )
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
remove the (now unused) __rq_clock() function.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
eliminate __rq_clock() use by changing it to:
__update_rq_clock(rq)
now = rq->clock;
identity transformation - no change in behavior.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
remove the now unused rq_clock() function.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
eliminate rq_clock() use by changing it to:
update_rq_clock(rq)
now = rq->clock;
identity transformation - no change in behavior.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
add the [__]update_rq_clock(rq) functions. (No change in functionality,
just reorganization to prepare for elimination of the heavy 64-bit
timestamp-passing in the scheduler.)
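a minimal sketch of the intended shape (simplified; details such as
guarding against clock warps and overflow are omitted, and the exact
implementation may differ):

  static void __update_rq_clock(struct rq *rq)
  {
          u64 now = sched_clock();

          /* keep the per-runqueue clock monotonic */
          if (now > rq->clock)
                  rq->clock = now;
  }

  static inline void update_rq_clock(struct rq *rq)
  {
          /* only the owning CPU advances the clock; remote readers
           * keep using whatever value is already there */
          if (smp_processor_id() == cpu_of(rq))
                  __update_rq_clock(rq);
  }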
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
There are two problems with balance_tasks() and how it is used:
1. The variables best_prio and best_prio_seen (inherited from the old
move_tasks()) were only required to handle problems caused by the
active/expired arrays, the order in which they were processed and the
possibility that the task with the highest priority could be on either.
These issues are no longer present and the extra overhead associated
with their use is unnecessary (and possibly wrong).
2. In the absence of CONFIG_FAIR_GROUP_SCHED being set, the same
this_best_prio variable needs to be used by all scheduling classes or
there is a risk of moving too much load. E.g. if the highest priority
task on this_rq at the beginning is a fairly low priority task and the
rt class migrates a task (during its turn), then that moved task becomes
the new highest priority task on this_rq; but when the sched_fair class
initializes its copy of this_best_prio it will still get the priority of
the original highest priority task because, with the run queue locks
held, the reschedule triggered by pull_task() will not yet have taken
place. This could result in inappropriate overriding of skip_for_load
and excessive load being moved.
The attached patch addresses these problems by deleting all references
to best_prio and best_prio_seen and making this_best_prio a reference
parameter to the various functions involved.
load_balance_fair() has also been modified so that this_best_prio is
only reset (in the loop) if CONFIG_FAIR_GROUP_SCHED is set. This should
preserve the effect of helping spread groups' higher priority tasks
around the available CPUs while improving system performance when
CONFIG_FAIR_GROUP_SCHED isn't set.
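the shape of the change, sketched (the helper name below is
illustrative, not the exact kernel code): the threshold is shared
through the pointer, so a task pulled by one class is visible to the
classes balanced after it:

  /* after pulling p onto this_rq, raise the shared threshold
   * (a lower ->prio value means a higher-priority task) */
  static void note_pulled_task(struct task_struct *p, int *this_best_prio)
  {
          if (p->prio < *this_best_prio)
                  *this_best_prio = p->prio;
  }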
Signed-off-by: Peter Williams <pwil3058@bigpond.net.au>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
Document the design thinking behind nice levels.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
kernel.sched_domain hierarchy is under CTL_UNNUMBERED and thus
unreachable to sysctl(2). Generating .ctl_number's in such situation is
not useful.
Signed-off-by: Alexey Dobriyan <adobriyan@sw.ru>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
small delta_exec accounting fix: increase delta_exec and increase
sum_exec_runtime even if the task is not on the runqueue anymore.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
cleanup: delta_mine is an unsigned value.
no code impact:
text data bss dec hex filename
27823 2726 16 30565 7765 sched.o.before
27823 2726 16 30565 7765 sched.o.after
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
speed up schedule(): share the 'now' parameter that deactivate_task()
was calculating internally.
( this also fixes the small accounting window between the deactivate
call and the pick_next_task() call. )
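roughly (the exact signatures below are assumptions based on the
description, not the actual diff):

  /* schedule(): a single clock read, shared instead of recomputed */
  now = __rq_clock(rq);
  if (prev->state && !(preempt_count() & PREEMPT_ACTIVE))
          deactivate_task(rq, prev, 1, now);
  /* ... the same 'now' is then handed to pick_next_task() ... */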
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
uninline rq_clock() to save 263 bytes of code:
text data bss dec hex filename
39561 3642 24 43227 a8db sched.o.before
39298 3642 24 42964 a7d4 sched.o.after
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
sched_fair.c defines print_cfs_stats, and sched_debug.c uses it, but sched.c
includes both sched_fair.c and sched_debug.c, so all the references to
print_cfs_stats occur in the same compilation unit. Thus, mark
print_cfs_stats static.
Eliminates a sparse warning:
warning: symbol 'print_cfs_stats' was not declared. Should it be static?
Signed-off-by: Josh Triplett <josh@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
here's another tiny cleanup. The generated code is not affected (gcc is
smart enough) but for people looking over the code it is just irritating
to have the extra conditional.
Signed-off-by: Ulrich Drepper <drepper@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
give a little hint to switch on CONFIG_SCHED_DEBUG.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
The move_tasks() function is currently multiplexed with two distinct
capabilities:
1. attempt to move a specified amount of weighted load from one run
queue to another; and
2. attempt to move a specified number of tasks from one run queue to
another.
The first of these capabilities is used in two places, load_balance()
and load_balance_idle(), and in both of these cases the return value of
move_tasks() is used purely to decide if tasks/load were moved and no
notice of the actual number of tasks moved is taken.
The second capability is used in exactly one place,
active_load_balance(), to attempt to move exactly one task and, as
before, the return value is only used as an indicator of success or failure.
This multiplexing of sched_task() was introduced, by me, as part of the
smpnice patches and was motivated by the fact that the alternative, one
function to move specified load and one to move a single task, would
have led to two functions of roughly the same complexity as the old
move_tasks() (or the new balance_tasks()). However, the modular design
of the new CFS scheduler allows a simpler solution to be adopted, and
this patch implements that solution by:
1. adding a new function, move_one_task(), to be used by
active_load_balance(); and
2. making move_tasks() a single purpose function that tries to move a
specified weighted load and returns 1 for success and 0 for failure.
One consequence of these changes is that neither move_one_task() nor
the new move_tasks() cares how many tasks sched_class.load_balance()
moves, and this enables its interface to be simplified by returning the
amount of load moved as its result and removing the load_moved pointer
from the argument list. This helps simplify the new move_tasks() and
slightly reduces the amount of work done in each of
sched_class.load_balance()'s implementations.
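The resulting split, sketched with abbreviated prototypes (not the
exact kernel declarations):

  /* move up to max_load_move of weighted load from 'busiest' to
   * this_rq; returns 1 if any load was moved, 0 otherwise */
  static int move_tasks(struct rq *this_rq, int this_cpu,
                        struct rq *busiest, unsigned long max_load_move,
                        struct sched_domain *sd, enum cpu_idle_type idle,
                        int *all_pinned);

  /* move exactly one task; returns 1 on success, 0 on failure */
  static int move_one_task(struct rq *this_rq, int this_cpu,
                           struct rq *busiest, struct sched_domain *sd,
                           enum cpu_idle_type idle);

  /* each class's load_balance() now simply returns the amount of
   * weighted load it moved */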
Further simplifications, e.g. changes to balance_tasks(), are possible
but (slightly) complicated by the special needs of load_balance_fair()
so I've left them to a later patch (if this one gets accepted).
NB: since move_tasks() is called with two run queue locks held, even
small reductions in overhead are worthwhile.
[ mingo@elte.hu ]
this change also reduces code size nicely:
text data bss dec hex filename
39216 3618 24 42858 a76a sched.o.before
39173 3618 24 42815 a73f sched.o.after
Signed-off-by: Peter Williams <pwil3058@bigpond.net.au>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
Peter Williams suggested to flip the order of update_cpu_load(rq) with
the ->task_tick() call. This is a NOP for the current scheduler (the
two functions are independent of each other), but ->task_tick() might
create some state for update_cpu_load() in the future (or in PlugSched).
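i.e. in scheduler_tick(), roughly:

  /* ->task_tick() runs first, so it may leave state behind for
   * update_cpu_load() to pick up */
  curr->sched_class->task_tick(rq, curr);
  update_cpu_load(rq);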
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
batch up the sleeper bonus sum a bit more. Anything below
sched-granularity is too small to make a practical difference
anyway.
this optimization reduces the math in high-frequency scheduling
scenarios.
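a sketch of the idea (the field and helper names below are assumptions,
not the exact kernel code): the bonus is accumulated and only folded
into the accounting once it amounts to at least one granularity unit:

  cfs_rq->sleeper_bonus += delta;
  if (cfs_rq->sleeper_bonus >= sysctl_sched_granularity)
          account_sleeper_bonus(cfs_rq);  /* hypothetical helper */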
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
Every time a cpu is added via hotplug, we allocate the per-cpu MONDO
queues but we never free them up. Freeing isn't easy since the first
cpu gets this memory from bootmem.
Therefore, the simplest thing to do to fix this bug is to allocate the
queues for all possible cpus at boot time.
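i.e. something along these lines at boot time (the helper name below is
illustrative):

  /* set up the queues once for every possible cpu, so hotplug add
   * never has to allocate and there is nothing to leak on remove */
  for_each_possible_cpu(cpu)
          init_cpu_mondo_queues(cpu);     /* illustrative helper name */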
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Check the cpu type in the OBP device tree before committing to
using the optimized Niagara memcpy and memset implementation.
If we don't recognize the cpu type, use a completely generic
version.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The check for audit_signals is misplaced and the check for
audit_dummy_context() is missing; as a result, if we send a signal to
auditd from a task with a NULL ->audit_context while audit_signals is
non-zero, we end up with an oops.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Acked-by: James Morris <jmorris@namei.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Small patch to H-TCP from Douglas Leith.
Fix estimation of maxRTT. The original code ignores rtt measurements
during slow start (via the check tp->snd_ssthresh < 0xFFFF), yet this
is probably a good time to try to estimate max rtt, as delayed acking
is disabled and slow start will only exit on a loss, which presumably
corresponds to a maxrtt measurement. Second, the original code (via
the check htcp_ccount(ca) > 3) ignores rtt data during what it
estimates to be the first 3 round-trip times. This seems like an
unnecessary check now that the RCV timestamps are no longer used
for rtt estimation.
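the resulting measurement path looks roughly like this (a simplified
sketch; the field names follow the description and may not match
tcp_htcp.c exactly):

  static void measure_rtt(struct sock *sk)
  {
          const struct tcp_sock *tp = tcp_sk(sk);
          struct htcp *ca = inet_csk_ca(sk);
          u32 srtt = tp->srtt >> 3;

          /* track the minimum RTT seen so far */
          if (!ca->minRTT || srtt < ca->minRTT)
                  ca->minRTT = srtt;

          /* track maxRTT too -- no longer skipped during slow start
           * or during the first few round trips */
          if (ca->maxRTT < ca->minRTT)
                  ca->maxRTT = ca->minRTT;
          if (srtt > ca->maxRTT)
                  ca->maxRTT = srtt;
  }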
Signed-off-by: Stephen Hemminger <shemminger@linux-foundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Loading nf_nat causes the conntrack core to be loaded, but we need the
IPv4 conntrack support as well.
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
conntracks
ctnetlink must return EEXIST for existing nat'ed conntracks instead of
EINVAL. Only return EINVAL if we try to update a conntrack with NAT
handling (that is not allowed).
Decadence:libnetfilter_conntrack/utils# ./conntrack_create_nat
TEST: create conntrack (0)(Success)
Decadence:libnetfilter_conntrack/utils# ./conntrack_create_nat
TEST: create conntrack (-1)(Invalid argument)
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
recent_seq_open()
If the call to seq_open() returns a non-zero value, the code calls
kfree(st) but then on the very next line proceeds to
dereference the pointer - not good.
Problem spotted by the Coverity checker.
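the fixed error path, roughly:

  st = kzalloc(sizeof(*st), GFP_KERNEL);
  if (st == NULL)
          return -ENOMEM;

  ret = seq_open(file, &recent_seq_ops);
  if (ret) {
          kfree(st);
          return ret;     /* bail out; st must not be touched below */
  }
  /* only now, with seq_open() successful, is it safe to keep using st */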
Signed-off-by: Jesper Juhl <jesper.juhl@gmail.com>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
master.kernel.org:/pub/scm/linux/kernel/git/linville/wireless-2.6
|
|
net_msg_warn is not defined because it is declared in net/sock.h, which
isn't included.
Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
hash table
The LSM domain mapping head table pointer was not being referenced via the
RCU-safe dereferencing function, rcu_dereference(). This patch adds the
missing calls to the NetLabel code.
This has been tested using recent linux-2.6 git kernels with no visible
regressions.
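i.e. reads of the table head now go through rcu_dereference() (the names
below are sketched from the description):

  /* before: plain load of the RCU-protected table pointer */
  bkt_list = &netlbl_domhsh->tbl[bkt];

  /* after: fetch the pointer through rcu_dereference() so the read is
   * safe against a concurrent table replacement */
  bkt_list = &rcu_dereference(netlbl_domhsh)->tbl[bkt];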
Signed-off-by: Paul Moore <paul.moore@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
> >> Looks like memset() is zeroing wrong nr of bytes.
> >
> > Good catch, however, I think we can just remove this memset altogether
> > since the memory gets allocated via kzalloc.
>
> Correct, that memset() is superfluous.
Ok. Then this should do it.
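i.e. the allocation already guarantees zeroed memory, so a follow-up
memset() adds nothing (a generic sketch, not the ibmveth code itself):

  buf = kzalloc(size, GFP_KERNEL);
  if (!buf)
          return -ENOMEM;
  /* memset(buf, 0, size) would be redundant here:
   * kzalloc() returns already-zeroed memory */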
Signed-off-by: Mariusz Kozlowski <m.kozlowski@tuxland.pl>
drivers/net/ibmveth.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
Signed-off-by: Jeff Garzik <jeff@garzik.org>
|
|
Fix a thinko (?) in setting phydev->autoneg.
Signed-off-by: Domen Puncer <domen.puncer@telargo.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
|
|
Fixed wrongly cast pointers
Signed-off-by: Thomas Klein <tklein@de.ibm.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
|
|
Use shorter method to determine whether adapter has configured ports
Signed-off-by: Thomas Klein <tklein@de.ibm.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
|
|
Fix: Workqueue ehea_driver_wq was not destroyed
Signed-off-by: Thomas Klein <tklein@de.ibm.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
|
|
This fixes the following oops which can occur when trying to deallocate
receive buffer pools using sysfs with the ibmveth driver.
NIP: d00000000024f954 LR: d00000000024fa58 CTR: c0000000000d7478
REGS: c00000000ffef9f0 TRAP: 0300 Not tainted (2.6.22-ppc64)
MSR: 8000000000009032 <EE,ME,IR,DR> CR: 24242442 XER: 00000010
DAR: 00000000000007f0, DSISR: 0000000042000000
TASK = c000000002f91360[2967] 'bash' THREAD: c00000001398c000 CPU: 2
GPR00: 0000000000000000 c00000000ffefc70 d000000000262d30 c00000001c4087a0
GPR04: 00000003000000fe 0000000000000000 000000000000000f c000000000579d80
GPR08: 0000000000365688 c00000001c408998 00000000000007f0 0000000000000000
GPR12: d000000000251e88 c000000000579d80 00000000200957ec 0000000000000000
GPR16: 00000000100b8808 00000000100feb30 0000000000000000 0000000010084828
GPR20: 0000000000000000 000000001014d4d0 0000000000000010 c00000000ffefeb0
GPR24: c00000001c408000 0000000000000000 c00000001c408000 00000000ffffb054
GPR28: 00000000000000fe 0000000000000003 d000000000262700 c00000001c4087a0
NIP [d00000000024f954] .ibmveth_remove_buffer_from_pool+0x38/0x108 [ibmveth]
LR [d00000000024fa58] .ibmveth_rxq_harvest_buffer+0x34/0x78 [ibmveth]
Call Trace:
[c00000000ffefc70] [c0000000000280a8] .dma_iommu_unmap_single+0x14/0x28 (unreliable)
[c00000000ffefd00] [d00000000024fa58] .ibmveth_rxq_harvest_buffer+0x34/0x78 [ibmveth]
[c00000000ffefd80] [d000000000250e40] .ibmveth_poll+0xd8/0x434 [ibmveth]
[c00000000ffefe40] [c00000000032da8c] .net_rx_action+0xdc/0x248
[c00000000ffefef0] [c000000000068b4c] .__do_softirq+0xa8/0x164
[c00000000ffeff90] [c00000000002789c] .call_do_softirq+0x14/0x24
[c00000001398f6f0] [c00000000000c04c] .do_softirq+0x68/0xac
[c00000001398f780] [c000000000068ca0] .irq_exit+0x54/0x6c
[c00000001398f800] [c00000000000c8e4] .do_IRQ+0x170/0x1ac
[c00000001398f890] [c000000000004790] hardware_interrupt_entry+0x18/0x1c
Exception: 501 at .plpar_hcall_norets+0x24/0x94
LR = .veth_pool_store+0x15c/0x298 [ibmveth]
[c00000001398fb80] [d000000000250b2c] .veth_pool_store+0x5c/0x298 [ibmveth] (unreliable)
[c00000001398fc30] [c000000000145530] .sysfs_write_file+0x140/0x1d8
[c00000001398fcf0] [c0000000000de89c] .vfs_write+0x120/0x208
[c00000001398fd90] [c0000000000df2c8] .sys_write+0x4c/0x8c
[c00000001398fe30] [c0000000000086ac] syscall_exit+0x0/0x40
Instruction dump:
fba1ffe8 fbe1fff8 789d0022 f8010010 f821ff71 789c0020 1d3d00a8 7b8a1f24
38000000 7c7f1b78 7d291a14 e9690128 <7c0a592a> e8030000 e9690120 80a90100
Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
|