Age | Commit message | Author |
|
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied to
a file was done in a spreadsheet of side-by-side results from the output of
two independent scanners (ScanCode & Windriver) producing SPDX tag:value
files, created by Philippe Ombredanne. Philippe prepared the base worksheet
and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- The file already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license was applied.
For non-*/uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note", otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file or it had no licensing in
it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet by Kate, Philippe and Thomas to determine the SPDX license
identifiers to apply to the source files, with confirmation in some cases
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is in part based on an older version of FOSSology, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files, Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors; they have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 files patched in the initial patch
version earlier this week, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally Greg ran the script using the .csv files to
generate the patches.
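For reference, the tag the script adds is a single comment on the very first
line of each file; following the kernel's convention for the two file types, a
minimal sketch (hypothetical file names) looks like:
  // SPDX-License-Identifier: GPL-2.0
  /* example.c: C++-style comment used as the first line of a .c source file */

  /* SPDX-License-Identifier: GPL-2.0 */
  /* example.h: C-style block comment used as the first line of a header file */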
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
<linux/rculist.h> in <linux/sched.h>
We don't actually need the full rculist.h header in sched.h anymore; we
will be able to include the smaller rcupdate.h header instead.
But first, update code that relied on the implicit header inclusion.
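A sketch of the kind of change this requires in an affected file (hypothetical
example; the real call sites are the ones touched by this patch):
  #include <linux/sched.h>
  #include <linux/rculist.h>  /* added: users of list_for_each_entry_rcu() etc.
                                 can no longer rely on <linux/sched.h> pulling
                                 this header in for them */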
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
Patch series "mm: unexport __get_user_pages_unlocked()".
This patch series continues the cleanup of the get_user_pages*() functions,
taking advantage of the fact that we can now pass gup_flags as we please.
It firstly adds an additional 'locked' parameter to
get_user_pages_remote() to allow its callers to utilise
VM_FAULT_RETRY functionality. This is necessary as the invocation of
__get_user_pages_unlocked() in process_vm_rw_single_vec() makes use of
this and no other existing higher-level function would allow it to do
so.
Secondly, existing callers of __get_user_pages_unlocked() are replaced
with the appropriate higher-level replacement -
get_user_pages_unlocked() if the current task and memory descriptor are
referenced, or get_user_pages_remote() if other task/memory descriptors
are referenced (having acquired mmap_sem).
This patch (of 2):
Add an int *locked parameter to get_user_pages_remote() to allow
VM_FAULT_RETRY faulting behaviour similar to get_user_pages_[un]locked().
Taking into account the previous adjustments to the get_user_pages*()
functions allowing for the passing of gup_flags, we are now in a
position where __get_user_pages_unlocked() need only be exported for its
ability to allow VM_FAULT_RETRY behaviour. This adjustment allows us to
subsequently unexport __get_user_pages_unlocked() as well as allowing
for future flexibility in the use of get_user_pages_remote().
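For orientation, the resulting prototype looks roughly like the following
(a sketch inferred from the description above; the authoritative declaration
lives in include/linux/mm.h):
  long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
                             unsigned long start, unsigned long nr_pages,
                             unsigned int gup_flags, struct page **pages,
                             struct vm_area_struct **vmas, int *locked);
Callers that do not need VM_FAULT_RETRY handling can pass NULL for 'locked'
and keep their previous behaviour.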
[sfr@canb.auug.org.au: merge fix for get_user_pages_remote API change]
Link: http://lkml.kernel.org/r/20161122210511.024ec341@canb.auug.org.au
Link: http://lkml.kernel.org/r/20161027095141.2569-2-lstoakes@gmail.com
Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Hugh Dickins <hughd@google.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krcmar <rkrcmar@redhat.com>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
This removes the 'write' and 'force' parameters from get_user_pages_remote()
and replaces them with 'gup_flags' to make the use of FOLL_FORCE explicit in
callers, as use of this flag can result in surprising behaviour (and
hence bugs) within the mm subsystem.
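A sketch of the caller-side change (hypothetical call site; FOLL_WRITE and
FOLL_FORCE are the existing gup flags the boolean parameters corresponded to):
  /* before: intent hidden behind two int parameters */
  get_user_pages_remote(tsk, mm, addr, 1, 1 /* write */, 0 /* force */,
                        &page, NULL);
  /* after: any use of FOLL_FORCE must now be spelled out by the caller */
  get_user_pages_remote(tsk, mm, addr, 1, FOLL_WRITE, &page, NULL);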
Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
For protection keys, we need to understand whether protections
should be enforced in software or not. In general, we enforce
protections when working on our own task, but not when working on
others. We call these "current" and "remote" operations.
This patch introduces a new get_user_pages() variant:
get_user_pages_remote()
which is a replacement for when get_user_pages() is called on a
non-current tsk/mm.
We also introduce a new gup flag, FOLL_REMOTE, which can be used
with the "__" gup variants to get this new behavior.
The uprobes is_trap_at_addr() location holds mmap_sem and
calls get_user_pages(current->mm) on an instruction address. This
makes it a pretty unique gup caller. Being an instruction access
and also really originating from the kernel (vs. the app), I opted
to consider this a 'remote' access where protection keys will not
be enforced.
Without protection keys, this patch should not change any behavior.
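For orientation, the new helper mirrors get_user_pages() as it existed at the
time, keeping the explicit tsk/mm pair (a rough sketch of the prototype; the
authoritative declaration is in include/linux/mm.h):
  long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
                             unsigned long start, unsigned long nr_pages,
                             int write, int force, struct page **pages,
                             struct vm_area_struct **vmas);
Internally it behaves like get_user_pages() with FOLL_REMOTE set, so software
enforcement of protection keys is skipped for these accesses.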
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Dave Hansen <dave@sr71.net>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: jack@suse.cz
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/20160212210154.3F0E51EA@viggo.jf.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
Acked-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: Cong Wang <amwang@redhat.com>
|
|
Commit bd03a3e4 "TOMOYO: Add policy namespace support." introduced policy
namespace. But as of /sbin/modprobe is executed from initramfs/initrd, profiles
for target domain's namespace is not defined because /sbin/tomoyo-init is not
yet called.
Reported-by: Jamie Nguyen <jamie@tomoyolinux.co.uk>
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
When TOMOYO started using a garbage collector in commit 847b173e "TOMOYO: Add
garbage collector.", we waited for close() before kfree(). Thus, elements to be
kfree()d were queued up on the tomoyo_gc_list list.
But it turned out that tomoyo_element_linked_by_gc() tends to choke the garbage
collector when certain patterns of entries are queued.
Since the garbage collector no longer waits for close() as of commit 2e503bbb
"TOMOYO: Fix lockdep warning.", we can remove the tomoyo_gc_list list and
tomoyo_element_linked_by_gc() by doing sequential processing.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Commit efe836ab "TOMOYO: Add built-in policy support." introduced
tomoyo_load_builtin_policy() but was by error called from nowhere.
Commit b22b8b9f "TOMOYO: Rename meminfo to stat and show more statistics."
introduced tomoyo_update_stat() but was by error not called from
tomoyo_assign_domain().
Also, mark tomoyo_io_printf() and tomoyo_path_permission() static functions,
as reported by "make namespacecheck".
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
I received an opinion that it is difficult to use the exception policy's domain
transition control directives because they need to match the pathname specified
in "file execute" directives. For example, if "file execute /bin/\*\-ls\-cat"
is given, the corresponding domain transition control directive needs to be like
"no_keep_domain /bin/\*\-ls\-cat from any".
If we could specify directives like the ones below, it would become more
convenient.
file execute /bin/ls keep exec.realpath="/bin/ls" exec.argv[0]="ls"
file execute /bin/cat keep exec.realpath="/bin/cat" exec.argv[0]="cat"
file execute /bin/\*\-ls\-cat child
file execute /usr/sbin/httpd <apache> exec.realpath="/usr/sbin/httpd" exec.argv[0]="/usr/sbin/httpd"
In above examples, "keep" works as if keep_domain is specified, "child" works
as if "no_reset_domain" and "no_initialize_domain" and "no_keep_domain" are
specified, "<apache>" causes domain transition to <apache> domain upon
successful execve() operation.
Moreover, we can also allow transition to different domains based on conditions
like below example.
<kernel> /usr/sbin/sshd
file execute /bin/bash <kernel> /usr/sbin/sshd //batch-session exec.argc=2 exec.argv[1]="-c"
file execute /bin/bash <kernel> /usr/sbin/sshd //root-session task.uid=0
file execute /bin/bash <kernel> /usr/sbin/sshd //nonroot-session task.uid!=0
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
There was a race window in which the pathname that is subjected to the "file
execute" permission check could change when retrying via the supervisor's
decision, because the pathname was recalculated upon retry. However, there is
an inevitable race window even without the supervisor, for we have to calculate
the symbolic link's pathname from "struct linux_binprm"->filename rather than
from "struct linux_binprm"->file, because we cannot back-calculate the symbolic
link's pathname from the dereferenced pathname.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
basis.
Add a per-entry flag which controls generation of grant logs, because Xen and
KVM issue ioctl requests very frequently. For example,
file ioctl /dev/null 0x5401 grant_log=no
will suppress grant logs to /sys/kernel/security/tomoyo/audit even if the
preference says grant_log=yes .
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
This patch adds support for checking environment variables' names.
Although TOMOYO already provides the ability to check argv[]/envp[] passed to
execve() requests,
file execute /bin/sh exec.envp["LD_LIBRARY_PATH"]="bar"
will reject execution of /bin/sh if the environment variable LD_LIBRARY_PATH is
not defined. To grant execution of /bin/sh if LD_LIBRARY_PATH is not defined,
administrators have to specify something like
file execute /bin/sh exec.envp["LD_LIBRARY_PATH"]="/system/lib"
file execute /bin/sh exec.envp["LD_LIBRARY_PATH"]=NULL
instead. Since there are many environment variables whereas conditional checks
are applied as "&&", it is difficult to cover all combinations. Therefore, this
patch supports conditional checks that are applied as "||", by specifying
something like
file execute /bin/sh
misc env LD_LIBRARY_PATH exec.envp["LD_LIBRARY_PATH"]="/system/lib"
which means "grant execution of /bin/sh if the environment variable is not
defined, or is defined and its value is /system/lib".
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Update comments for scripts/kernel-doc and fix some of the errors reported by
scripts/checkpatch.pl .
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Enable conditional ACL by passing object's pointers.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
This patch adds support for permission checks using the argv[]/envp[] of an
execve() request. Hooks are in the last patch of this patchset.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
This patch adds support for permission checks using current thread's UID/GID
etc. in addition to pathnames.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Gather string constants into one file in order to make the object size smaller.
Use unsigned types where appropriate.
read()/write() return ssize_t.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Mauras Olivier reported that it is difficult to use TOMOYO in LXC environments,
because TOMOYO cannot distinguish between environments outside the container
and environments inside the container, since LXC environments are created using
pivot_root(). To address this problem, this patch introduces policy namespaces.
Each policy namespace has its own set of domain policy, exception policy and
profiles, which are all independent of other namespaces. This independence
allows users to develop policy without worrying about interference among
namespaces.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
An ACL group allows the administrator to globally grant not only the
"file read" permission but also other permissions.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Convert "allow_..." style directives to "file ..." style directives.
By converting to the latter style, we can pack policy like
"file read/write/execute /path/to/file".
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Use structure for passing ACL line, in preparation for supporting policy
namespace and conditional parameters.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Use common structure for ACL with "struct list_head" + "atomic_t".
Use array/struct where possible.
Remove is_group from "struct tomoyo_name_union"/"struct tomoyo_number_union".
Pass "struct file"->private_data rather than "struct file".
Update some of the comments.
Bring tomoyo_same_acl_head() from common.h to domain.c .
Bring tomoyo_invalid()/tomoyo_valid() from common.h to util.c .
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
In order to synchronize with TOMOYO 1.8's syntax,
(1) Remove special handling for allow_read/write permission.
(2) Replace deny_rewrite/allow_rewrite permission with allow_append permission.
(3) Remove file_pattern keyword.
(4) Remove allow_read permission from exception policy.
(5) Allow creating domains in enforcing mode without calling supervisor.
(6) Add permission check for opening directory for reading.
(7) Add permission check for stat() operation.
(8) Make "cat < /sys/kernel/security/tomoyo/self_domain" behave as if
"cat /sys/kernel/security/tomoyo/self_domain".
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Commit c9e69318 "TOMOYO: Allow wildcard for execute permission." changed execute
permission and domainname to accept wildcards. But tomoyo_find_next_domain()
was using pathname passed to execve() rather than pathname specified by the
execute permission. As a result, processes were not able to transit to domains
which contain wildcards in their domainnames.
This patch passes pathname specified by the execute permission back to
tomoyo_find_next_domain() so that processes can transit to domains which
contain wildcards in their domainnames.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Use shorter names in order to make it easier to fit the 80-column limit.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Use common code for "initialize_domain"/"no_initialize_domain"/"keep_domain"/
"no_keep_domain" keywords.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Some programs behave differently depending on the argv[0] passed to execve().
TOMOYO has an "alias" keyword in order to allow administrators to define
different domains if the pathname requested by execve() is a symlink. But the
"alias" keyword is incomplete because it assumes that the requested pathname
and argv[0] are identical. Thus, remove the "alias" keyword (in this patch) and
add syntax for checking argv[0] (in future patches).
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Now that the lists are accessible via array index, aggregate the reader
functions using that index.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Assign list ids and make the lists an array of "struct list_head".
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Use shorter names in order to make it easier to fit the 80-column limit.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
We can use a callback function since parameters are passed via
"const struct tomoyo_request_info".
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
tomoyo_file_perm() and tomoyo_path_permission() are similar.
We can embed tomoyo_file_perm() into tomoyo_path_permission().
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Use common code for elements using "struct list_head" + "bool" structure.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Use common "struct list_head" + "bool" structure.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Use common "struct list_head" + "bool" + "u8" structure and
use common code for elements using that structure.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
This patch allows users to change the access control mode on a per-operation
basis. This feature comes from the non-LSM version of TOMOYO, which is designed
to permit users to use SELinux and TOMOYO at the same time.
SELinux does not care about filenames within a directory, whereas TOMOYO does.
A change of filename can change how the file is used. For example, renaming
index.txt to .htaccess will change how the file is used. Thus, letting SELinux
enforce read()/write()/mmap() etc. restrictions and letting TOMOYO enforce
rename() restrictions is an example usage of this feature.
What is unfortunate for me is that currently LSM does not allow users to use
SELinux and the LSM version of TOMOYO at the same time...
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
This patch allows users to aggregate programs which provide similar
functionality (e.g. /usr/bin/vi and /usr/bin/emacs).
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Some applications create and execute programs dynamically. We need to accept
wildcards for the execute permission because such programs contain random
suffixes in their filenames. This patch loosens the restrictions on string
parameters.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Allow pathnames longer than 4000 bytes.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
security/tomoyo/common.c became too large to read.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Since the behavior of the system is restricted by policy, we may need to update
the policy when packages are updated.
We need to update the policy in the following cases.
* The pathname of files has changed.
* The dependency of files has changed.
* The access permissions required have increased.
The ideal way to update policy is to rebuild it from scratch using learning
mode. But it is not desirable to change from enforcing mode to another mode
once the system has entered the production state. Even if MAC could support a
per-application enforcing mode, the MAC becomes useless if an application that
is not running in enforcing mode is cracked. For example, the whole system
becomes vulnerable if only the HTTP server application is running in learning
mode to rebuild policy for that application. So, in TOMOYO Linux, updating
policy is done while the system is running in enforcing mode.
This patch implements an "interactive enforcing mode" which allows
administrators to judge whether to accept a policy violation in enforcing mode
or not.
A demo movie is available at http://www.youtube.com/watch?v=b9q1Jo25LPA .
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Use "struct tomoyo_request_info" instead of passing individual arguments.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Use stack memory for the pending entry to reduce kmalloc() calls whose result
would soon be kfree()d.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
Some of TOMOYO's functions may sleep after mutex_lock(). If the OOM killer
selects a process which is waiting at mutex_lock(), the to-be-killed process
can't be killed. Thus, replace mutex_lock() with mutex_lock_interruptible() so
that the to-be-killed process can immediately return from TOMOYO's functions.
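The pattern is the usual kernel one (an illustrative sketch, not the exact
hunks from this patch; tomoyo_policy_lock stands in for whichever mutex a given
function takes):
  if (mutex_lock_interruptible(&tomoyo_policy_lock))
          return -EINTR;  /* a signal (e.g. SIGKILL from the OOM killer) is pending */
  /* ... update policy lists ... */
  mutex_unlock(&tomoyo_policy_lock);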
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
|
|
In Ubuntu, security_path_*() hooks are exported to Unionfs. Thus, prepare for
being called from inside VFS functions because I'm not sure whether it is safe
to use GFP_KERNEL or not.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
implicit slab.h inclusion from percpu.h
percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files. percpu.h includes slab.h which
in turn includes gfp.h, making everything defined by the two files
universally available and complicating inclusion dependencies.
The percpu.h -> slab.h dependency is about to be removed. Prepare for
this change by updating users of gfp and slab facilities to include those
headers directly instead of assuming their availability. As this conversion
needs to touch a large number of source files, the following script is
used as the basis of the conversion.
http://userweb.kernel.org/~tj/misc/slabh-sweep.py
The script does the following.
* Scan files for gfp and slab usages and update includes such that
only the necessary includes are there, i.e. if only gfp is used,
gfp.h, and if slab is used, slab.h.
* When the script inserts a new include, it looks at the include
blocks and tries to put the new include such that its order conforms
to its surroundings. It's put in the include block which contains
core kernel includes, in the same order that the rest are ordered -
alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
doesn't seem to be any matching order.
* If the script can't find a place to put a new include (mostly
because the file doesn't have a fitting include block), it prints out
an error message indicating which .h file needs to be added to the
file.
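As an illustration of the resulting edits (hypothetical file; the real changes
are spread across the tree), a converted source file ends up with something
like:
  #include <linux/module.h>  /* pulls in percpu.h, which used to pull in slab.h */
  #include <linux/slab.h>    /* now included explicitly because this file calls
                                kmalloc()/kfree() */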
The conversion was done in the following steps.
1. The initial automatic conversion of all .c files updated slightly
over 4000 files, deleting around 700 includes and adding ~480 gfp.h
and ~3000 slab.h inclusions. The script emitted errors for ~400
files.
2. Each error was manually checked. Some didn't need the inclusion,
some needed manual addition while adding it to implementation .h or
embedding .c file was more appropriate for others. This step added
inclusions to around 150 files.
3. The script was run again and the output was compared to the edits
from #2 to make sure no file was left behind.
4. Several build tests were done and a couple of problems were fixed.
e.g. lib/decompress_*.c used malloc/free() wrappers around slab
APIs requiring slab.h to be added manually.
5. The script was run on all .h files but without automatically
editing them as sprinkling gfp.h and slab.h inclusions around .h
files could easily lead to inclusion dependency hell. Most gfp.h
inclusion directives were ignored as stuff from gfp.h was usually
widely available and often used in preprocessor macros. Each
slab.h inclusion directive was examined and added manually as
necessary.
6. percpu.h was updated not to include slab.h.
7. Build tests were done on the following configurations and failures
were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
distributed build env didn't work with gcov compiles) and a few
more options had to be turned off depending on archs to make things
build (like ipr on powerpc/64 which failed due to missing writeq).
* x86 and x86_64 UP and SMP allmodconfig and a custom test config.
* powerpc and powerpc64 SMP allmodconfig
* sparc and sparc64 SMP allmodconfig
* ia64 SMP allmodconfig
* s390 SMP allmodconfig
* alpha SMP allmodconfig
* um on x86_64 SMP allmodconfig
8. percpu.h modifications were reverted so that it could be applied as
a separate patch and serve as bisection point.
Given that I had only a couple of failures from the tests in step
6, I'm fairly confident about the coverage of this conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers which should be discoverable easily on most builds of
the specific arch.
Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
|
|
__func__ is used only for debug printk(). We can remove it.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
|
|
This patch adds garbage collector support to TOMOYO.
Elements are protected by "struct srcu_struct tomoyo_ss".
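Readers therefore follow the usual SRCU pattern (an illustrative sketch of the
read side, using the tomoyo_ss domain named above):
  int idx = srcu_read_lock(&tomoyo_ss);
  /* ... traverse policy lists; elements seen here cannot be kfree()d yet ... */
  srcu_read_unlock(&tomoyo_ss, idx);
The collector side can then wait in synchronize_srcu(&tomoyo_ss) after
unlinking an element and before kfree()ing it.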
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Acked-by: Serge Hallyn <serue@us.ibm.com>
Signed-off-by: James Morris <jmorris@namei.org>
|