mm, treewide: rename MAX_ORDER to MAX_PAGE_ORDER
commit 23baf83 ("mm, treewide: redefine MAX_ORDER sanely") has
changed the definition of MAX_ORDER to be inclusive.  This has caused
issues with code that was not yet upstream and depended on the previous
definition.

To draw attention to the altered meaning of the define, rename MAX_ORDER
to MAX_PAGE_ORDER.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Kirill A. Shutemov <[email protected]>
Cc: Linus Torvalds <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
kiryl authored and akpm00 committed Jan 8, 2024
1 parent fd37721 commit 5e0a760
Showing 76 changed files with 186 additions and 181 deletions.
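
For orientation, here is a minimal sketch of the relationship the rename highlights. This is an illustration written for this summary, assuming the post-series definitions in include/linux/mmzone.h; the exact ifdef layout in that header may differ.

    /* Illustrative sketch only -- not copied verbatim from include/linux/mmzone.h. */
    #ifdef CONFIG_ARCH_FORCE_MAX_ORDER
    #define MAX_PAGE_ORDER CONFIG_ARCH_FORCE_MAX_ORDER
    #else
    #define MAX_PAGE_ORDER 10	/* default used by most architectures below */
    #endif

    /*
     * The define is inclusive: order MAX_PAGE_ORDER itself is a valid allocation
     * order, so per-order arrays such as free_area[] need MAX_PAGE_ORDER + 1 slots.
     */
    #define NR_PAGE_ORDERS (MAX_PAGE_ORDER + 1)

The largest single allocation the buddy allocator hands out is therefore PAGE_SIZE << MAX_PAGE_ORDER, which is the bound several of the driver hunks below clamp against.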
2 changes: 1 addition & 1 deletion Documentation/admin-guide/kdump/vmcoreinfo.rst
@@ -193,7 +193,7 @@ from this.
--------------------------------

Free areas descriptor. User-space tools use this value to iterate the
free_area ranges. MAX_ORDER is used by the zone buddy allocator.
free_area ranges. NR_PAGE_ORDERS is used by the zone buddy allocator.

prb
---
24 changes: 12 additions & 12 deletions Documentation/admin-guide/kernel-parameters.txt
@@ -970,17 +970,17 @@
buddy allocator. Bigger value increase the probability
of catching random memory corruption, but reduce the
amount of memory for normal system use. The maximum
possible value is MAX_ORDER/2. Setting this parameter
to 1 or 2 should be enough to identify most random
memory corruption problems caused by bugs in kernel or
driver code when a CPU writes to (or reads from) a
random memory location. Note that there exists a class
of memory corruptions problems caused by buggy H/W or
F/W or by drivers badly programming DMA (basically when
memory is written at bus level and the CPU MMU is
bypassed) which are not detectable by
CONFIG_DEBUG_PAGEALLOC, hence this option will not help
tracking down these problems.
possible value is MAX_PAGE_ORDER/2. Setting this
parameter to 1 or 2 should be enough to identify most
random memory corruption problems caused by bugs in
kernel or driver code when a CPU writes to (or reads
from) a random memory location. Note that there exists
a class of memory corruptions problems caused by buggy
H/W or F/W or by drivers badly programming DMA
(basically when memory is written at bus level and the
CPU MMU is bypassed) which are not detectable by
CONFIG_DEBUG_PAGEALLOC, hence this option will not
help tracking down these problems.

debug_pagealloc=
[KNL] When CONFIG_DEBUG_PAGEALLOC is set, this parameter
@@ -4136,7 +4136,7 @@
[KNL] Minimal page reporting order
Format: <integer>
Adjust the minimal page reporting order. The page
reporting is disabled when it exceeds MAX_ORDER.
reporting is disabled when it exceeds MAX_PAGE_ORDER.

panic= [KNL] Kernel behaviour on panic: delay <timeout>
timeout > 0: seconds before rebooting
14 changes: 7 additions & 7 deletions Documentation/networking/packet_mmap.rst
@@ -263,20 +263,20 @@ the name indicates, this function allocates pages of memory, and the second
argument is "order" or a power of two number of pages, that is
(for PAGE_SIZE == 4096) order=0 ==> 4096 bytes, order=1 ==> 8192 bytes,
order=2 ==> 16384 bytes, etc. The maximum size of a
region allocated by __get_free_pages is determined by the MAX_ORDER macro. More
precisely the limit can be calculated as::
region allocated by __get_free_pages is determined by the MAX_PAGE_ORDER macro.
More precisely the limit can be calculated as::

PAGE_SIZE << MAX_ORDER
PAGE_SIZE << MAX_PAGE_ORDER

In a i386 architecture PAGE_SIZE is 4096 bytes
In a 2.4/i386 kernel MAX_ORDER is 10
In a 2.6/i386 kernel MAX_ORDER is 11
In a 2.4/i386 kernel MAX_PAGE_ORDER is 10
In a 2.6/i386 kernel MAX_PAGE_ORDER is 11

So get_free_pages can allocate as much as 4MB or 8MB in a 2.4/2.6 kernel
respectively, with an i386 architecture.

User space programs can include /usr/include/sys/user.h and
/usr/include/linux/mmzone.h to get PAGE_SIZE MAX_ORDER declarations.
/usr/include/linux/mmzone.h to get PAGE_SIZE MAX_PAGE_ORDER declarations.

The pagesize can also be determined dynamically with the getpagesize (2)
system call.
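
As a quick sanity check of the formula above, a worked example using the i386 numbers quoted in this document (this snippet is an illustration, not part of packet_mmap.rst):

    /* Reproduce the 4 MB and 8 MB limits quoted above. */
    #include <stdio.h>

    int main(void)
    {
            unsigned long page_size = 4096;	/* i386 PAGE_SIZE, as stated above */

            /* 2.4/i386: maximum order 10 -> 4096 << 10 = 4194304 bytes (4 MB) */
            printf("2.4 limit: %lu bytes\n", page_size << 10);

            /* 2.6/i386: maximum order 11 -> 4096 << 11 = 8388608 bytes (8 MB) */
            printf("2.6 limit: %lu bytes\n", page_size << 11);

            return 0;
    }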
@@ -324,7 +324,7 @@ Definitions:
(see /proc/slabinfo)
<pointer size> depends on the architecture -- ``sizeof(void *)``
<page size> depends on the architecture -- PAGE_SIZE or getpagesize (2)
<max-order> is the value defined with MAX_ORDER
<max-order> is the value defined with MAX_PAGE_ORDER
<frame size> it's an upper bound of frame's capture size (more on this later)
============== ================================================================

2 changes: 1 addition & 1 deletion arch/arm/Kconfig
@@ -1362,7 +1362,7 @@ config ARCH_FORCE_MAX_ORDER
default "10"
help
The kernel page allocator limits the size of maximal physically
contiguous allocations. The limit is called MAX_ORDER and it
contiguous allocations. The limit is called MAX_PAGE_ORDER and it
defines the maximal power of two of number of pages that can be
allocated as a single contiguous block. This option allows
overriding the default setting when ability to allocate very
20 changes: 10 additions & 10 deletions arch/arm64/Kconfig
@@ -1520,32 +1520,32 @@ config XEN

# include/linux/mmzone.h requires the following to be true:
#
# MAX_ORDER + PAGE_SHIFT <= SECTION_SIZE_BITS
# MAX_PAGE_ORDER + PAGE_SHIFT <= SECTION_SIZE_BITS
#
# so the maximum value of MAX_ORDER is SECTION_SIZE_BITS - PAGE_SHIFT:
# so the maximum value of MAX_PAGE_ORDER is SECTION_SIZE_BITS - PAGE_SHIFT:
#
# | SECTION_SIZE_BITS | PAGE_SHIFT | max MAX_ORDER | default MAX_ORDER |
# ----+-------------------+--------------+-----------------+--------------------+
# 4K | 27 | 12 | 15 | 10 |
# 16K | 27 | 14 | 13 | 11 |
# 64K | 29 | 16 | 13 | 13 |
# | SECTION_SIZE_BITS | PAGE_SHIFT | max MAX_PAGE_ORDER | default MAX_PAGE_ORDER |
# ----+-------------------+--------------+----------------------+-------------------------+
# 4K | 27 | 12 | 15 | 10 |
# 16K | 27 | 14 | 13 | 11 |
# 64K | 29 | 16 | 13 | 13 |
config ARCH_FORCE_MAX_ORDER
int
default "13" if ARM64_64K_PAGES
default "11" if ARM64_16K_PAGES
default "10"
help
The kernel page allocator limits the size of maximal physically
contiguous allocations. The limit is called MAX_ORDER and it
contiguous allocations. The limit is called MAX_PAGE_ORDER and it
defines the maximal power of two of number of pages that can be
allocated as a single contiguous block. This option allows
overriding the default setting when ability to allocate very
large blocks of physically contiguous memory is required.

The maximal size of allocation cannot exceed the size of the
section, so the value of MAX_ORDER should satisfy
section, so the value of MAX_PAGE_ORDER should satisfy

MAX_ORDER + PAGE_SHIFT <= SECTION_SIZE_BITS
MAX_PAGE_ORDER + PAGE_SHIFT <= SECTION_SIZE_BITS

Don't change if unsure.

2 changes: 1 addition & 1 deletion arch/arm64/include/asm/sparsemem.h
@@ -10,7 +10,7 @@
/*
* Section size must be at least 512MB for 64K base
* page size config. Otherwise it will be less than
* MAX_ORDER and the build process will fail.
* MAX_PAGE_ORDER and the build process will fail.
*/
#ifdef CONFIG_ARM64_64K_PAGES
#define SECTION_SIZE_BITS 29
3 changes: 2 additions & 1 deletion arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -228,7 +228,8 @@ int hyp_pool_init(struct hyp_pool *pool, u64 pfn, unsigned int nr_pages,
int i;

hyp_spin_lock_init(&pool->lock);
pool->max_order = min(MAX_ORDER, get_order(nr_pages << PAGE_SHIFT));
pool->max_order = min(MAX_PAGE_ORDER,
get_order(nr_pages << PAGE_SHIFT));
for (i = 0; i <= pool->max_order; i++)
INIT_LIST_HEAD(&pool->free_area[i]);
pool->range_start = phys;
2 changes: 1 addition & 1 deletion arch/arm64/mm/hugetlbpage.c
@@ -51,7 +51,7 @@ void __init arm64_hugetlb_cma_reserve(void)
* page allocator. Just warn if there is any change
* breaking this assumption.
*/
WARN_ON(order <= MAX_ORDER);
WARN_ON(order <= MAX_PAGE_ORDER);
hugetlb_cma_reserve(order);
}
#endif /* CONFIG_CMA */
2 changes: 1 addition & 1 deletion arch/m68k/Kconfig.cpu
@@ -402,7 +402,7 @@ config ARCH_FORCE_MAX_ORDER
default "10"
help
The kernel page allocator limits the size of maximal physically
contiguous allocations. The limit is called MAX_ORDER and it
contiguous allocations. The limit is called MAX_PAGE_ORDER and it
defines the maximal power of two of number of pages that can be
allocated as a single contiguous block. This option allows
overriding the default setting when ability to allocate very
2 changes: 1 addition & 1 deletion arch/nios2/Kconfig
@@ -50,7 +50,7 @@ config ARCH_FORCE_MAX_ORDER
default "10"
help
The kernel page allocator limits the size of maximal physically
contiguous allocations. The limit is called MAX_ORDER and it
contiguous allocations. The limit is called MAX_PAGE_ORDER and it
defines the maximal power of two of number of pages that can be
allocated as a single contiguous block. This option allows
overriding the default setting when ability to allocate very
2 changes: 1 addition & 1 deletion arch/powerpc/Kconfig
@@ -915,7 +915,7 @@ config ARCH_FORCE_MAX_ORDER
default "10"
help
The kernel page allocator limits the size of maximal physically
contiguous allocations. The limit is called MAX_ORDER and it
contiguous allocations. The limit is called MAX_PAGE_ORDER and it
defines the maximal power of two of number of pages that can be
allocated as a single contiguous block. This option allows
overriding the default setting when ability to allocate very
2 changes: 1 addition & 1 deletion arch/powerpc/mm/book3s64/iommu_api.c
@@ -97,7 +97,7 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
}

mmap_read_lock(mm);
chunk = (1UL << (PAGE_SHIFT + MAX_ORDER)) /
chunk = (1UL << (PAGE_SHIFT + MAX_PAGE_ORDER)) /
sizeof(struct vm_area_struct *);
chunk = min(chunk, entries);
for (entry = 0; entry < entries; entry += chunk) {
2 changes: 1 addition & 1 deletion arch/powerpc/mm/hugetlbpage.c
@@ -615,7 +615,7 @@ void __init gigantic_hugetlb_cma_reserve(void)
order = mmu_psize_to_shift(MMU_PAGE_16G) - PAGE_SHIFT;

if (order) {
VM_WARN_ON(order <= MAX_ORDER);
VM_WARN_ON(order <= MAX_PAGE_ORDER);
hugetlb_cma_reserve(order);
}
}
2 changes: 1 addition & 1 deletion arch/powerpc/platforms/powernv/pci-ioda.c
@@ -1389,7 +1389,7 @@ static long pnv_pci_ioda2_setup_default_config(struct pnv_ioda_pe *pe)
* DMA window can be larger than available memory, which will
* cause errors later.
*/
const u64 maxblock = 1UL << (PAGE_SHIFT + MAX_ORDER);
const u64 maxblock = 1UL << (PAGE_SHIFT + MAX_PAGE_ORDER);

/*
* We create the default window as big as we can. The constraint is
2 changes: 1 addition & 1 deletion arch/sh/mm/Kconfig
@@ -26,7 +26,7 @@ config ARCH_FORCE_MAX_ORDER
default "10"
help
The kernel page allocator limits the size of maximal physically
contiguous allocations. The limit is called MAX_ORDER and it
contiguous allocations. The limit is called MAX_PAGE_ORDER and it
defines the maximal power of two of number of pages that can be
allocated as a single contiguous block. This option allows
overriding the default setting when ability to allocate very
2 changes: 1 addition & 1 deletion arch/sparc/Kconfig
@@ -277,7 +277,7 @@ config ARCH_FORCE_MAX_ORDER
default "12"
help
The kernel page allocator limits the size of maximal physically
contiguous allocations. The limit is called MAX_ORDER and it
contiguous allocations. The limit is called MAX_PAGE_ORDER and it
defines the maximal power of two of number of pages that can be
allocated as a single contiguous block. This option allows
overriding the default setting when ability to allocate very
2 changes: 1 addition & 1 deletion arch/sparc/kernel/pci_sun4v.c
@@ -194,7 +194,7 @@ static void *dma_4v_alloc_coherent(struct device *dev, size_t size,

size = IO_PAGE_ALIGN(size);
order = get_order(size);
if (unlikely(order > MAX_ORDER))
if (unlikely(order > MAX_PAGE_ORDER))
return NULL;

npages = size >> IO_PAGE_SHIFT;
4 changes: 2 additions & 2 deletions arch/sparc/mm/tsb.c
@@ -402,8 +402,8 @@ void tsb_grow(struct mm_struct *mm, unsigned long tsb_index, unsigned long rss)
unsigned long new_rss_limit;
gfp_t gfp_flags;

if (max_tsb_size > PAGE_SIZE << MAX_ORDER)
max_tsb_size = PAGE_SIZE << MAX_ORDER;
if (max_tsb_size > PAGE_SIZE << MAX_PAGE_ORDER)
max_tsb_size = PAGE_SIZE << MAX_PAGE_ORDER;

new_cache_index = 0;
for (new_size = 8192; new_size < max_tsb_size; new_size <<= 1UL) {
4 changes: 2 additions & 2 deletions arch/um/kernel/um_arch.c
@@ -373,10 +373,10 @@ int __init linux_main(int argc, char **argv)
max_physmem = TASK_SIZE - uml_physmem - iomem_size - MIN_VMALLOC;

/*
* Zones have to begin on a 1 << MAX_ORDER page boundary,
* Zones have to begin on a 1 << MAX_PAGE_ORDER page boundary,
* so this makes sure that's true for highmem
*/
max_physmem &= ~((1 << (PAGE_SHIFT + MAX_ORDER)) - 1);
max_physmem &= ~((1 << (PAGE_SHIFT + MAX_PAGE_ORDER)) - 1);
if (physmem_size + iomem_size > max_physmem) {
highmem = physmem_size + iomem_size - max_physmem;
physmem_size -= highmem;
2 changes: 1 addition & 1 deletion arch/xtensa/Kconfig
@@ -793,7 +793,7 @@ config ARCH_FORCE_MAX_ORDER
default "10"
help
The kernel page allocator limits the size of maximal physically
contiguous allocations. The limit is called MAX_ORDER and it
contiguous allocations. The limit is called MAX_PAGE_ORDER and it
defines the maximal power of two of number of pages that can be
allocated as a single contiguous block. This option allows
overriding the default setting when ability to allocate very
2 changes: 1 addition & 1 deletion drivers/accel/qaic/qaic_data.c
@@ -451,7 +451,7 @@ static int create_sgt(struct qaic_device *qdev, struct sg_table **sgt_out, u64 s
* later
*/
buf_extra = (PAGE_SIZE - size % PAGE_SIZE) % PAGE_SIZE;
max_order = min(MAX_ORDER - 1, get_order(size));
max_order = min(MAX_PAGE_ORDER - 1, get_order(size));
} else {
/* allocate a single page for book keeping */
nr_pages = 1;
8 changes: 4 additions & 4 deletions drivers/base/regmap/regmap-debugfs.c
@@ -226,8 +226,8 @@ static ssize_t regmap_read_debugfs(struct regmap *map, unsigned int from,
if (*ppos < 0 || !count)
return -EINVAL;

if (count > (PAGE_SIZE << MAX_ORDER))
count = PAGE_SIZE << MAX_ORDER;
if (count > (PAGE_SIZE << MAX_PAGE_ORDER))
count = PAGE_SIZE << MAX_PAGE_ORDER;

buf = kmalloc(count, GFP_KERNEL);
if (!buf)
@@ -373,8 +373,8 @@ static ssize_t regmap_reg_ranges_read_file(struct file *file,
if (*ppos < 0 || !count)
return -EINVAL;

if (count > (PAGE_SIZE << MAX_ORDER))
count = PAGE_SIZE << MAX_ORDER;
if (count > (PAGE_SIZE << MAX_PAGE_ORDER))
count = PAGE_SIZE << MAX_PAGE_ORDER;

buf = kmalloc(count, GFP_KERNEL);
if (!buf)
2 changes: 1 addition & 1 deletion drivers/block/floppy.c
@@ -3079,7 +3079,7 @@ static void raw_cmd_free(struct floppy_raw_cmd **ptr)
}
}

#define MAX_LEN (1UL << MAX_ORDER << PAGE_SHIFT)
#define MAX_LEN (1UL << MAX_PAGE_ORDER << PAGE_SHIFT)

static int raw_cmd_copyin(int cmd, void __user *param,
struct floppy_raw_cmd **rcmd)
2 changes: 1 addition & 1 deletion drivers/crypto/ccp/sev-dev.c
@@ -906,7 +906,7 @@ static int sev_ioctl_do_get_id2(struct sev_issue_cmd *argp)
/*
* The length of the ID shouldn't be assumed by software since
* it may change in the future. The allocation size is limited
* to 1 << (PAGE_SHIFT + MAX_ORDER) by the page allocator.
* to 1 << (PAGE_SHIFT + MAX_PAGE_ORDER) by the page allocator.
* If the allocation fails, simply return ENOMEM rather than
* warning in the kernel log.
*/
6 changes: 3 additions & 3 deletions drivers/crypto/hisilicon/sgl.c
@@ -70,11 +70,11 @@ struct hisi_acc_sgl_pool *hisi_acc_create_sgl_pool(struct device *dev,
HISI_ACC_SGL_ALIGN_SIZE);

/*
* the pool may allocate a block of memory of size PAGE_SIZE * 2^MAX_ORDER,
* the pool may allocate a block of memory of size PAGE_SIZE * 2^MAX_PAGE_ORDER,
* block size may exceed 2^31 on ia64, so the max of block size is 2^31
*/
block_size = 1 << (PAGE_SHIFT + MAX_ORDER < 32 ?
PAGE_SHIFT + MAX_ORDER : 31);
block_size = 1 << (PAGE_SHIFT + MAX_PAGE_ORDER < 32 ?
PAGE_SHIFT + MAX_PAGE_ORDER : 31);
sgl_num_per_block = block_size / sgl_size;
block_num = count / sgl_num_per_block;
remain_sgl = count % sgl_num_per_block;
2 changes: 1 addition & 1 deletion drivers/gpu/drm/i915/gem/i915_gem_internal.c
@@ -36,7 +36,7 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
struct sg_table *st;
struct scatterlist *sg;
unsigned int npages; /* restricted by sg_alloc_table */
int max_order = MAX_ORDER;
int max_order = MAX_PAGE_ORDER;
unsigned int max_segment;
gfp_t gfp;

2 changes: 1 addition & 1 deletion drivers/gpu/drm/i915/gem/selftests/huge_pages.c
@@ -115,7 +115,7 @@ static int get_huge_pages(struct drm_i915_gem_object *obj)
do {
struct page *page;

GEM_BUG_ON(order > MAX_ORDER);
GEM_BUG_ON(order > MAX_PAGE_ORDER);
page = alloc_pages(GFP | __GFP_ZERO, order);
if (!page)
goto err;
8 changes: 4 additions & 4 deletions drivers/gpu/drm/ttm/tests/ttm_pool_test.c
@@ -109,7 +109,7 @@ static const struct ttm_pool_test_case ttm_pool_basic_cases[] = {
},
{
.description = "Above the allocation limit",
.order = MAX_ORDER + 1,
.order = MAX_PAGE_ORDER + 1,
},
{
.description = "One page, with coherent DMA mappings enabled",
@@ -118,7 +118,7 @@ static const struct ttm_pool_test_case ttm_pool_basic_cases[] = {
},
{
.description = "Above the allocation limit, with coherent DMA mappings enabled",
.order = MAX_ORDER + 1,
.order = MAX_PAGE_ORDER + 1,
.use_dma_alloc = true,
},
};
@@ -165,7 +165,7 @@ static void ttm_pool_alloc_basic(struct kunit *test)
fst_page = tt->pages[0];
last_page = tt->pages[tt->num_pages - 1];

if (params->order <= MAX_ORDER) {
if (params->order <= MAX_PAGE_ORDER) {
if (params->use_dma_alloc) {
KUNIT_ASSERT_NOT_NULL(test, (void *)fst_page->private);
KUNIT_ASSERT_NOT_NULL(test, (void *)last_page->private);
@@ -182,7 +182,7 @@ static void ttm_pool_alloc_basic(struct kunit *test)
* order 0 blocks
*/
KUNIT_ASSERT_EQ(test, fst_page->private,
min_t(unsigned int, MAX_ORDER,
min_t(unsigned int, MAX_PAGE_ORDER,
params->order));
KUNIT_ASSERT_EQ(test, last_page->private, 0);
}