- 26 Nov, 2015 1 commit
Alan Cox authored
In the case that the reservation contained "low", the starting position in the popmap for the free page search was incorrectly calculated. The most likely (and visible) symptom of this error was the assertion failure, "vm_reserv_reclaim_contig: pa is too low".
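
For illustration only, the index calculation being described, where the search must begin at the first popmap bit at or above "low" when the reservation contains "low", could be sketched like this (hypothetical names, 4 KB base pages assumed):

    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t vm_paddr_t;    /* physical address type, as in FreeBSD */
    #define PAGE_SHIFT 12           /* assumption: 4 KB base pages */

    /* Index into the reservation's popmap of the first page at or above "low".
     * rv_start is the physical address of the reservation's first page; when
     * the reservation contains "low", the search must not start at index 0. */
    static size_t popmap_start_index(vm_paddr_t low, vm_paddr_t rv_start)
    {
        if (low <= rv_start)
            return 0;
        return (size_t)((low - rv_start) >> PAGE_SHIFT);
    }

    int main(void)
    {
        /* reservation starting at 2 MB, "low" 64 KB into it: start at page 16 */
        printf("%zu\n", popmap_start_index(0x210000, 0x200000));
        return 0;
    }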

- 06 Aug, 2015 1 commit
Alan Cox authored

- 08 Jun, 2015 1 commit
Alan Cox authored
Differential Revision: https://reviews.freebsd.org/D2712
Reviewed by: kib
Sponsored by: EMC / Isilon Storage Division

- 11 Apr, 2015 1 commit
Alan Cox authored
an infinite loop.
Submitted by: Svatopluk Kraus
MFC after: 1 week

- 14 Mar, 2015 1 commit
Ian Lepore authored
PR: 195668

- 12 Mar, 2015 1 commit
Ian Lepore authored
PR: 195668

- 08 Mar, 2015 1 commit
Konstantin Belousov authored
Sponsored by: The FreeBSD Foundation
MFC after: 1 week

- 22 Nov, 2014 1 commit
Alan Cox authored
it instead of phys_avail[].
Discussed with: Svatopluk Kraus

- 10 Sep, 2014 1 commit
Alan Cox authored
isn't being allocated for the last of the requested pages, because a reservation won't fit in the gap between allocated pages, then the reservation structure shouldn't be initialized. While I'm here, improve the nearby comments.
Reported by: jeff, pho
MFC after: 1 week
Sponsored by: EMC / Isilon Storage Division

- 11 Jun, 2014 1 commit
Alan Cox authored
machines. Specifically, there was a mismatch between how the routine allocation and deallocation operations accessed the population map and how the aggressively optimized reservation-breaking operation accessed it. So, problems only occurred when reservations were broken. This change makes the routine operations access the population map in the same way as the reservation-breaking operation. This bug was introduced in r259999.
PR: 187080
Tested by: jmg (on an "armeb" machine)
Sponsored by: EMC / Isilon Storage Division
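
The mismatch described here is invisible on little-endian machines because word-indexed and byte-indexed views of the same bitmap happen to agree there; a small standalone illustration (not the kernel's code):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Word-indexed convention: bit i lives in word i/64 at position i%64. */
    static int isset_word(const uint64_t *map, int i)
    {
        return (map[i / 64] >> (i % 64)) & 1;
    }

    /* Byte-indexed convention: bit i lives in byte i/8 at position i%8. */
    static int isset_byte(const uint64_t *map, int i)
    {
        const uint8_t *b = (const uint8_t *)map;

        return (b[i / 8] >> (i % 8)) & 1;
    }

    int main(void)
    {
        uint64_t map[2];

        memset(map, 0, sizeof(map));
        map[0] |= UINT64_C(1) << 9;     /* set bit 9, word-indexed */

        /* Both views agree on little-endian; on big-endian (e.g. armeb) the
         * byte-indexed view reads the wrong byte and misses the bit. */
        printf("word view: %d, byte view: %d\n",
            isset_word(map, 9), isset_byte(map, 9));
        return 0;
    }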

- 07 Jun, 2014 1 commit
Alan Cox authored
a partially populated reservation becomes fully populated, and decrease this field when a fully populated reservation becomes partially populated.

Use this field to simplify the implementation of pmap_enter_object() on amd64, arm, and i386.

On all architectures where we support superpages, the cost of creating a superpage mapping is roughly the same as creating a base page mapping. For example, both kinds of mappings entail the creation of a single PTE and PV entry. With this in mind, use the page size field to make the implementation of vm_map_pmap_enter(..., MAP_PREFAULT_PARTIAL) a little smarter. Previously, if MAP_PREFAULT_PARTIAL was specified to vm_map_pmap_enter(), that function would only map base pages. Now, it will create up to 96 base page or superpage mappings.

Reviewed by: kib
Sponsored by: EMC / Isilon Storage Division
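
A minimal sketch of the bookkeeping described above, under the assumption that the field is a page-size indicator kept on the page backing the start of the reservation and that a reservation spans 512 base pages; all identifiers here are illustrative rather than the kernel's:

    #include <stdbool.h>
    #include <stdio.h>

    #define RV_NPAGES 512              /* assumption: base pages per reservation */

    struct page {
        int psind;                     /* 0 = base page, 1 = superpage-sized run */
    };

    struct reservation {
        struct page *first_page;       /* page backing the start of the run */
        int popcnt;                    /* populated base pages in the run */
    };

    static void rv_populate(struct reservation *rv)
    {
        if (++rv->popcnt == RV_NPAGES)
            rv->first_page->psind = 1; /* partially -> fully populated */
    }

    static void rv_depopulate(struct reservation *rv)
    {
        if (rv->popcnt-- == RV_NPAGES)
            rv->first_page->psind = 0; /* fully -> partially populated */
    }

    /* A pmap_enter_object()-style consumer can test one field instead of
     * rescanning the whole run before creating a superpage mapping. */
    static bool can_map_superpage(const struct reservation *rv)
    {
        return rv->first_page->psind > 0;
    }

    int main(void)
    {
        struct page p = { 0 };
        struct reservation rv = { &p, RV_NPAGES - 1 };

        rv_populate(&rv);
        printf("superpage-capable: %d\n", can_map_superpage(&rv));
        rv_depopulate(&rv);
        printf("superpage-capable: %d\n", can_map_superpage(&rv));
        return 0;
    }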

- 29 Dec, 2013 1 commit
Alan Cox authored
page being allocated isn't already allocated.
Sponsored by: EMC / Isilon Storage Division

- 28 Dec, 2013 1 commit
Alan Cox authored
Change the way that reservations keep track of which pages are in use. Instead of using the page's PG_CACHED and PG_FREE flags, maintain a bit vector within the reservation. This approach has a couple of benefits. First, it makes breaking reservations much cheaper because there are fewer cache misses to identify the unused pages. Second, it is a prerequisite for supporting two or more reservation sizes.
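
A rough sketch of the bit-vector ("popmap") idea, assuming 512 base pages per reservation and 64-bit map words; the names are chosen for illustration and are not copied from vm_reserv.c:

    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>

    #define RV_NPAGES 512                           /* assumption: pages per reservation */
    #define NBPOPMAP  (sizeof(uint64_t) * CHAR_BIT) /* bits per popmap word */

    struct reservation {
        uint64_t popmap[RV_NPAGES / NBPOPMAP];      /* one bit per base page */
        int popcnt;                                 /* number of bits set */
    };

    static void popmap_set(struct reservation *rv, int i)
    {
        rv->popmap[i / NBPOPMAP] |= UINT64_C(1) << (i % NBPOPMAP);
        rv->popcnt++;
    }

    static void popmap_clear(struct reservation *rv, int i)
    {
        rv->popmap[i / NBPOPMAP] &= ~(UINT64_C(1) << (i % NBPOPMAP));
        rv->popcnt--;
    }

    static int popmap_is_set(const struct reservation *rv, int i)
    {
        return (rv->popmap[i / NBPOPMAP] >> (i % NBPOPMAP)) & 1;
    }

    int main(void)
    {
        struct reservation rv = { { 0 }, 0 };

        popmap_set(&rv, 3);
        popmap_set(&rv, 100);
        popmap_clear(&rv, 3);
        printf("%d %d %d\n", popmap_is_set(&rv, 3), popmap_is_set(&rv, 100), rv.popcnt);
        return 0;
    }

Breaking a reservation then only has to scan eight 64-bit words rather than inspect the flags of 512 vm_page structures, which is where the cache-miss savings come from.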

- 17 Sep, 2013 1 commit
Konstantin Belousov authored
longer abused to store pointer to slab. Remove it.
Reviewed by: alc
Sponsored by: The FreeBSD Foundation
Approved by: re (hrs)

- 12 May, 2013 1 commit
Alan Cox authored
vm_page_insert() so that (1) vm_radix_lookup_le() is never called while the free page queues lock is held and (2) vm_radix_lookup_le() is called at most once. This change reduces the average time that the free page queues lock is held by vm_page_alloc() as well as vm_page_alloc()'s average overall running time.
Sponsored by: EMC / Isilon Storage Division

- 18 Mar, 2013 1 commit
Attilio Rao authored
Replace the per-object resident and cached pages splay tree with a path-compressed multi-digit radix trie. Along with this, also switch the x86-specific handling of idle page tables to the radix trie. This change is intended to do the following:
- Allow the acquisition of read locking for lookup operations on the resident/cached page collections, as the per-vm_page_t splay iterators are now removed.
- Increase the scalability of operations on the page collections.

The radix trie relies on the consumers' locking to ensure the atomicity of its operations. In order to avoid deadlocks, the bisection nodes are pre-allocated in the UMA zone. This can be done safely because the algorithm needs at most one new node per insert, which means the maximum number of nodes needed is bounded by the number of available physical frames; however, a new bisection node is not always actually required.

The radix trie implements path compression because UFS indirect blocks can lead to several objects with a very sparse trie, increasing the number of levels that would otherwise have to be scanned. Path compression also helps node prefetching by preserving the single-node-per-insert property. This code is not generalized (yet) because of the possible loss of performance from making many of the sizes in play configurable; an effort to make this code more general and reusable by other consumers may follow later.

The only KPI change is the removal of the function vm_page_splay(). The only KBI change is the removal of the left/right iterators from struct vm_page. Further technical notes, broken into smaller pieces, can be retrieved from the svn branch: http://svn.freebsd.org/base/user/attilio/vmcontention/

Sponsored by: EMC / Isilon storage division
In collaboration with: alc, jeff
Tested by: flo, pho, jhb, davide
Tested by: ian (arm)
Tested by: andreast (powerpc)
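
The kernel's structure is a multi-digit radix trie backed by a UMA zone; as a rough userland illustration of the two properties the message leans on (path compression, and at most one new bisection node per insert, which is what makes pre-allocation feasible), here is a toy crit-bit style binary trie over 32-bit keys. It is a sketch of the idea only, not the vm_radix code:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct node {
        int bit;                    /* -1 marks a leaf */
        uint32_t key;               /* valid for leaves only */
        struct node *child[2];      /* valid for internal nodes only */
    };

    static struct node *leaf_new(uint32_t key)
    {
        struct node *n = calloc(1, sizeof(*n));

        n->bit = -1;
        n->key = key;
        return n;
    }

    static int dir(uint32_t key, int bit)
    {
        return (key >> bit) & 1;
    }

    static struct node *trie_insert(struct node *root, uint32_t key)
    {
        struct node *n, *split, **link;
        int b;

        if (root == NULL)
            return leaf_new(key);

        /* Walk down to the leaf this key would reach. */
        n = root;
        while (n->bit >= 0)
            n = n->child[dir(key, n->bit)];
        if (n->key == key)
            return root;            /* already present */

        /* Highest bit where the new key differs from that leaf. */
        b = 31;
        while (dir(key, b) == dir(n->key, b))
            b--;

        /* Exactly one new internal ("bisection") node is needed. */
        split = calloc(1, sizeof(*split));
        split->bit = b;
        split->child[dir(key, b)] = leaf_new(key);

        /* Re-walk from the root and splice the new node in where the lower
         * bits stop mattering; skipping the non-differing bits is the
         * path compression. */
        link = &root;
        while ((*link)->bit >= 0 && (*link)->bit > b)
            link = &(*link)->child[dir(key, (*link)->bit)];
        split->child[1 - dir(key, b)] = *link;
        *link = split;
        return root;
    }

    static int trie_contains(const struct node *root, uint32_t key)
    {
        const struct node *n = root;

        while (n != NULL && n->bit >= 0)
            n = n->child[dir(key, n->bit)];
        return n != NULL && n->key == key;
    }

    int main(void)
    {
        struct node *root = NULL;   /* memory deliberately not freed in this toy */
        uint32_t keys[] = { 5, 9, 7, 1u << 20 };
        size_t i;

        for (i = 0; i < sizeof(keys) / sizeof(keys[0]); i++)
            root = trie_insert(root, keys[i]);
        printf("%d %d %d\n", trie_contains(root, 9),
            trie_contains(root, 6), trie_contains(root, 1u << 20));
        return 0;
    }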

- 09 Mar, 2013 1 commit
Attilio Rao authored
future further optimizations where the vm_object lock will be held in read mode most of the time the resident pool of pages in the page cache is accessed for reading purposes.

The change is mostly mechanical, but a few notes are in order:
* The KPI changes as follows:
  - VM_OBJECT_LOCK() -> VM_OBJECT_WLOCK()
  - VM_OBJECT_TRYLOCK() -> VM_OBJECT_TRYWLOCK()
  - VM_OBJECT_UNLOCK() -> VM_OBJECT_WUNLOCK()
  - VM_OBJECT_LOCK_ASSERT(MA_OWNED) -> VM_OBJECT_ASSERT_WLOCKED() (in order to avoid visibility of implementation details)
  - The read-mode operations are added: VM_OBJECT_RLOCK(), VM_OBJECT_TRYRLOCK(), VM_OBJECT_RUNLOCK(), VM_OBJECT_ASSERT_RLOCKED(), VM_OBJECT_ASSERT_LOCKED()
* The vm/vm_pager.h namespace-pollution avoidance (which forced consumers to include sys/mutex.h directly to cater to its inline functions using VM_OBJECT_LOCK()) means that all vm/vm_pager.h consumers must now also include sys/rwlock.h.
* zfs requires a rather convoluted fix to include FreeBSD rwlocks in the compat layer, because a name clash between the FreeBSD and Solaris versions must be avoided. For this purpose, zfs redefines the vm_object locking functions directly, isolating the FreeBSD components in specific compat stubs.

The KPI is heavily broken by this commit. Third-party ports must be updated accordingly (I can think off-hand of VirtualBox, for example).

Sponsored by: EMC / Isilon storage division
Reviewed by: jeff
Reviewed by: pjd (ZFS specific review)
Discussed with: alc
Tested by: pho
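
A sketch of how the renamed and newly added operations could be layered over rwlock(9); the structure, field name, and macro names here are assumptions for illustration (kernel-side code only), not copies of vm_object.h:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/rwlock.h>

    /* Illustrative only: a vm_object-like structure protected by a rwlock. */
    struct object_sketch {
        struct rwlock lock;
        /* ... resident page trie, flags, etc. ... */
    };

    /* Write-mode operations (the old exclusive-lock family, renamed). */
    #define OBJ_WLOCK(o)            rw_wlock(&(o)->lock)
    #define OBJ_TRYWLOCK(o)         rw_try_wlock(&(o)->lock)
    #define OBJ_WUNLOCK(o)          rw_wunlock(&(o)->lock)
    #define OBJ_ASSERT_WLOCKED(o)   rw_assert(&(o)->lock, RA_WLOCKED)

    /* Read-mode operations, new with this change: lookups can now run
     * concurrently with each other while writers remain excluded. */
    #define OBJ_RLOCK(o)            rw_rlock(&(o)->lock)
    #define OBJ_TRYRLOCK(o)         rw_try_rlock(&(o)->lock)
    #define OBJ_RUNLOCK(o)          rw_runlock(&(o)->lock)
    #define OBJ_ASSERT_RLOCKED(o)   rw_assert(&(o)->lock, RA_RLOCKED)
    #define OBJ_ASSERT_LOCKED(o)    rw_assert(&(o)->lock, RA_LOCKED)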

- 15 Jul, 2012 1 commit
Alan Cox authored
the last reservation of a multi-reservation allocation not being initialized.

- 08 Apr, 2012 1 commit
Alan Cox authored
that it will be freed to the cache pool rather than the default pool. Otherwise, the cached pages within the reservation may be recycled sooner than necessary.
Reported by: Andrey Zonov

- 05 Dec, 2011 1 commit
Alan Cox authored
use superpage reservations. So, for the first time, kernel virtual memory that is allocated by contigmalloc(), kmem_alloc_attr(), and kmem_alloc_contig() can be promoted to superpages. In fact, even a series of small contigmalloc() allocations may collectively result in a promoted superpage.

Eliminate some duplication of code in vm_reserv_alloc_page().

Change the type of vm_reserv_reclaim_contig()'s first parameter in order that it be consistent with other vm_*_contig() functions.

Tested by: marius (sparc64)

- 30 Oct, 2011 1 commit
Alan Cox authored
eliminating duplicated code in the various pmap implementations.

Micro-optimize vm_phys_free_pages().

Introduce vm_phys_free_contig(). It is a fast routine for freeing an arbitrary number of physically contiguous pages. In particular, it doesn't require the number of pages to be a power of two.

Use "u_long" instead of "unsigned long".

Bruce Evans (bde@) has convinced me that the "boundary" parameters to kmem_alloc_contig(), vm_phys_alloc_contig(), and vm_reserv_reclaim_contig() should be of type "vm_paddr_t" and not "u_long". Make this change.
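
The notable property of vm_phys_free_contig() is that the run length need not be a power of two. One way a buddy-style allocator can handle that is to peel off the largest aligned power-of-two block at each step; the following standalone sketch shows that decomposition (the real routine differs in detail, and MAX_ORDER here is an assumption):

    #include <stdio.h>

    #define MAX_ORDER 12    /* assumption: largest buddy block is 2^12 pages */

    static void free_block(unsigned long pfn, int order)
    {
        printf("free %lu pages at page frame %lu (order %d)\n",
            1UL << order, pfn, order);
    }

    /* Free an arbitrary run of contiguous pages by peeling off the largest
     * aligned power-of-two block at each step; npages need not be a power
     * of two. */
    static void free_contig(unsigned long pfn, unsigned long npages)
    {
        int order;

        while (npages > 0) {
            order = MAX_ORDER;
            while (order > 0 && ((1UL << order) > npages ||
                (pfn & ((1UL << order) - 1)) != 0))
                order--;
            free_block(pfn, order);
            pfn += 1UL << order;
            npages -= 1UL << order;
        }
    }

    int main(void)
    {
        free_contig(3, 13);     /* 13 pages starting at an odd page frame */
        return 0;
    }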

- 27 Jan, 2011 1 commit
Matthew D Fleming authored
sbuf_new_for_sysctl(9). This allows using an sbuf with a SYSCTL_OUT drain for extremely large amounts of data where the caller knows that appropriate references are held, and sleeping is not an issue.
Inspired by: rwatson
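
A typical handler built on this interface might look roughly like the sketch below; the handler name and its output are made up, while the sbuf calls are the documented sbuf(9) ones (kernel-side code, shown for illustration):

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/sbuf.h>
    #include <sys/sysctl.h>

    static int sysctl_example_stats(SYSCTL_HANDLER_ARGS)
    {
        struct sbuf sb;
        int error, i;

        /* The sbuf drains through SYSCTL_OUT as it fills, so the handler
         * does not need to guess a large-enough fixed buffer up front. */
        sbuf_new_for_sysctl(&sb, NULL, 128, req);
        for (i = 0; i < 1000; i++)
            sbuf_printf(&sb, "entry %d\n", i);
        error = sbuf_finish(&sb);
        sbuf_delete(&sb);
        return (error);
    }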

- 19 Nov, 2010 1 commit
Max Laier authored
with only a single free page if that satisfies the requested size.
MFC after: 3 days
Reviewed by: alc

- 10 Nov, 2010 1 commit
Alan Cox authored
creation of large page mappings in the pmap, it can provide modest performance benefits. In particular, for a "buildworld" on a 2x 1GHz Ultrasparc IIIi it reduced the wall clock time by 2.2% and the system time by 12.6%.
Tested by: marius@

- 30 Oct, 2010 1 commit
Alan Cox authored
MFC after: 1 week

- 19 Oct, 2010 1 commit
Jamie Gritton authored
by /etc/rc.d/jail.

- 16 Sep, 2010 1 commit
Matthew D Fleming authored
Add a drain function for struct sysctl_req, and use it for a variety of handlers, some of which had to do awkward things to get a large enough SBUF_FIXEDLEN buffer.

Note that some sysctl handlers were explicitly outputting a trailing NUL byte. This behaviour was preserved, though it should not be necessary.

Reviewed by: phk (original patch)

- 13 Sep, 2010 1 commit
Matthew D Fleming authored
unexpected things in copyout(9) and so wiring the user buffer is not sufficient to perform a copyout(9) while holding a random mutex.
Requested by: nwhitehorn

- 09 Sep, 2010 1 commit
Matthew D Fleming authored
handlers, some of which had to do awkward things to get a large enough FIXEDLEN buffer. Note that some sysctl handlers were explicitly outputting a trailing NUL byte. This behaviour was preserved, though it should not be necessary.
Reviewed by: phk

- 11 Apr, 2009 1 commit
Alan Cox authored
a reservation, unless all of the reservation's pages were free, the reservation was moved to the head of the partially-populated reservations queue, where it would be the next reservation to be broken in case the free page queues were emptied. Now, instead, I am moving it to the tail. Very likely this reservation is in the process of being freed in its entirety, so placing it at the tail of the queue makes it more likely that the underlying physical memory will be returned to the free page queues as one contiguous chunk.

If a reservation must be broken, it will, instead, be the longest unchanged reservation, which is arguably the reservation that is least likely to ever achieve promotion or be freed in its entirety.

MFC after: 6 weeks
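
The policy is easy to visualize with a queue sketch: reclaim always breaks the reservation at the head, so a reservation that is inserted at the tail survives longest. A small userland illustration using <sys/queue.h>, with illustrative names:

    #include <sys/queue.h>
    #include <stdio.h>

    struct resv {
        const char *name;
        TAILQ_ENTRY(resv) partpopq;     /* partially populated queue linkage */
    };

    TAILQ_HEAD(partpop_head, resv);

    int main(void)
    {
        struct partpop_head q;
        struct resv idle = { .name = "long-idle reservation" };
        struct resv freeing = { .name = "reservation being freed" };
        struct resv *rv;

        TAILQ_INIT(&q);
        TAILQ_INSERT_TAIL(&q, &idle, partpopq);
        /* The old behaviour was the equivalent of TAILQ_INSERT_HEAD(), which
         * made the recently touched reservation the first one broken. */
        TAILQ_INSERT_TAIL(&q, &freeing, partpopq);

        /* Reclaim breaks reservations starting from the head of the queue. */
        TAILQ_FOREACH(rv, &q, partpopq)
            printf("break next: %s\n", rv->name);
        return 0;
    }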

- 19 Oct, 2008 1 commit
Ulf Lilleengen authored

- 06 Apr, 2008 1 commit
Alan Cox authored
contigmalloc(9) as a last resort to steal pages from an inactive, partially-used superpage reservation. Rename vm_reserv_reclaim() to vm_reserv_reclaim_inactive() and refactor it so that a separate subroutine is responsible for breaking the selected reservation. This subroutine is also used by vm_reserv_reclaim_contig().

- 29 Dec, 2007 1 commit
Alan Cox authored
machine-independent support for superpages. (The earlier part was the rewrite of the physical memory allocator.) The remainder of the code required for superpages support is machine-dependent and will be added to the various pmap implementations at a later date. Initially, I am only supporting one large page size per architecture. Moreover, I am only enabling the reservation system on amd64. (In an emergency, it can be disabled by setting VM_NRESERVLEVELS to 0 in amd64/include/vmparam.h or your kernel configuration file.)