  1. 06 Aug, 2015 1 commit
  2. 08 Jun, 2015 1 commit
  3. 11 Apr, 2015 1 commit
  4. 14 Mar, 2015 1 commit
  5. 12 Mar, 2015 1 commit
  6. 08 Mar, 2015 1 commit
  7. 22 Nov, 2014 1 commit
  8. 10 Sep, 2014 1 commit
    • Alan Cox · 64f096ee
      Fix a boundary case error in vm_reserv_alloc_contig(): If a reservation
      isn't being allocated for the last of the requested pages, because a
      reservation won't fit in the gap between allocated pages, then the
      reservation structure shouldn't be initialized.
      
      While I'm here, improve the nearby comments.
      
      Reported by:	jeff, pho
      MFC after:	1 week
      Sponsored by:	EMC / Isilon Storage Division
  9. 11 Jun, 2014 1 commit
    • Alan Cox · 3180f757
      Correct a bug in the management of the population map on big-endian
      machines.  Specifically, there was a mismatch between how the routine
      allocation and deallocation operations accessed the population map
      and how the aggressively optimized reservation-breaking operation
      accessed it.  So, problems only occurred when reservations were broken.
      This change makes the routine operations access the population map in
      the same way as the reservation breaking operation.
      
      This bug was introduced in r259999.
      
      PR:		187080
      Tested by:	jmg (on an "armeb" machine)
      Sponsored by:	EMC / Isilon Storage Division
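The bug class is easy to reproduce in miniature: if one code path indexes a bitmap word by word while another reinterprets the same memory with a different access size, the two agree on little-endian machines but diverge on big-endian ones. The fix the commit describes is to make every path use the same access pattern. A minimal sketch in plain C with invented names (the real popmap helpers in vm_reserv.c differ):

```c
#include <assert.h>
#include <limits.h>

/*
 * Illustrative sketch, not the actual vm_reserv.c code: a population map
 * kept as an array of machine words.  Routing every set/clear/test through
 * the same word-indexed helpers keeps all consumers, including a
 * reservation-breaking scan, in agreement regardless of endianness.
 */
#define NBPOPMAP ((int)(sizeof(unsigned long) * CHAR_BIT))

static void
popmap_set(unsigned long popmap[], int i)
{
	popmap[i / NBPOPMAP] |= 1UL << (i % NBPOPMAP);
}

static void
popmap_clear(unsigned long popmap[], int i)
{
	popmap[i / NBPOPMAP] &= ~(1UL << (i % NBPOPMAP));
}

static int
popmap_is_set(const unsigned long popmap[], int i)
{
	return ((popmap[i / NBPOPMAP] & (1UL << (i % NBPOPMAP))) != 0);
}
```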
  10. 07 Jun, 2014 1 commit
    • Alan Cox · dd05fa19
      Add a page size field to struct vm_page.  Increase the page size field when
      a partially populated reservation becomes fully populated, and decrease this
      field when a fully populated reservation becomes partially populated.
      
      Use this field to simplify the implementation of pmap_enter_object() on
      amd64, arm, and i386.
      
      On all architectures where we support superpages, the cost of creating a
      superpage mapping is roughly the same as creating a base page mapping.  For
      example, both kinds of mappings entail the creation of a single PTE and PV
      entry.  With this in mind, use the page size field to make the
      implementation of vm_map_pmap_enter(..., MAP_PREFAULT_PARTIAL) a little
      smarter.  Previously, if MAP_PREFAULT_PARTIAL was specified to
      vm_map_pmap_enter(), that function would only map base pages.  Now, it will
      create up to 96 base page or superpage mappings.
      
      Reviewed by:	kib
      Sponsored by:	EMC / Isilon Storage Division
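A rough userland sketch of the smarter prefault loop, with invented names and a hard-coded amd64 geometry (512 4 KB base pages per 2 MB superpage); the real vm_map_pmap_enter() logic differs:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Illustrative only: walk a range of pages and create at most MAX_INIT_PT
 * mappings, using one superpage mapping to cover a fully populated
 * reservation and one base-page mapping otherwise.  The psind field stands
 * in for the page size field the commit adds to struct vm_page.
 */
#define MAX_INIT_PT	96	/* mapping budget, as in the commit text */
#define SP_NPAGES	512	/* base pages per superpage (assumed amd64) */

struct fake_page {
	int psind;	/* 0 = base page, 1 = fully populated superpage */
};

/* Returns the number of mappings created for pages [0, npages). */
static int
prefault_mappings(const struct fake_page pages[], size_t npages)
{
	size_t i;
	int nmaps;

	nmaps = 0;
	for (i = 0; i < npages && nmaps < MAX_INIT_PT; nmaps++) {
		if (pages[i].psind == 1)
			i += SP_NPAGES;	/* one PTE covers the reservation */
		else
			i++;		/* one base-page PTE */
	}
	return (nmaps);
}
```

With the page size field, two superpage mappings can cover 1024 resident pages, where the old base-page-only code would have exhausted its budget long before.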
  11. 29 Dec, 2013 1 commit
  12. 28 Dec, 2013 1 commit
    • Alan Cox · ec179322
      MFp4 alc_popmap
        Change the way that reservations keep track of which pages are in use.
        Instead of using the page's PG_CACHED and PG_FREE flags, maintain a bit
        vector within the reservation.  This approach has a couple of benefits.
        First, it makes breaking reservations much cheaper because there are
        fewer cache misses to identify the unused pages.  Second, it is a
        prerequisite for supporting two or more reservation sizes.
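The cost argument can be sketched in plain C: with a bit vector, a reservation-breaking scan rejects a fully populated word with a single compare, touching one word per 64 pages instead of one vm_page structure per page. Invented names; not the actual vm_reserv.c code:

```c
#include <assert.h>
#include <limits.h>

#define BPW ((int)(sizeof(unsigned long) * CHAR_BIT))	/* bits per word */

/*
 * Hypothetical sketch: count the unused pages of a reservation by scanning
 * its population bit vector.  A word equal to ~0UL means all of its pages
 * are in use, so the scan skips it outright; only sparse words are walked
 * bit by bit.  The real code frees the unused pages instead of counting.
 */
static int
count_unused(const unsigned long popmap[], int npages)
{
	unsigned long w;
	int b, i, n;

	n = 0;
	for (i = 0; i < npages; i += BPW) {
		w = popmap[i / BPW];
		if (w == ~0UL)
			continue;	/* fully populated: one compare, done */
		for (b = 0; b < BPW && i + b < npages; b++)
			if ((w & (1UL << b)) == 0)
				n++;
	}
	return (n);
}
```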
  13. 17 Sep, 2013 1 commit
  14. 12 May, 2013 1 commit
    • Alan Cox · 404eb1b3
      Refactor vm_page_alloc()'s interactions with vm_reserv_alloc_page() and
      vm_page_insert() so that (1) vm_radix_lookup_le() is never called while the
      free page queues lock is held and (2) vm_radix_lookup_le() is called at most
      once.  This change reduces the average time that the free page queues lock
      is held by vm_page_alloc() as well as vm_page_alloc()'s average overall
      running time.
      
      Sponsored by:	EMC / Isilon Storage Division
  15. 18 Mar, 2013 1 commit
    • Attilio Rao · 774d251d
      Sync back vmcontention branch into HEAD:
      Replace the per-object resident and cached pages splay tree with a
      path-compressed multi-digit radix trie.
      Along with this, switch also the x86-specific handling of idle page
      tables to using the radix trie.
      
      This change is supposed to do the following:
      - Allow the acquisition of read locks for lookup operations on the
        resident/cached page collections, as the per-vm_page_t splay
        iterators are now removed.
      - Increase the scalability of operations on the page collections.
      
      The radix trie relies on its consumers' locking to ensure the atomicity
      of its operations.  In order to avoid deadlocks, the bisection nodes
      are pre-allocated in the UMA zone.  This can be done safely because
      the algorithm needs at most one new node per insert, which means the
      maximum number of needed nodes is bounded by the number of available
      physical frames.  However, a new bisection node is not always needed.
      
      The radix trie implements path compression because UFS indirect blocks
      can lead to objects with a very sparse trie, increasing the number of
      levels that must usually be scanned.  Path compression also helps node
      pre-fetching by introducing the single-node-per-insert property.
      
      This code is not generalized (yet) because making the various sizes in
      play configurable could cost performance.  However, it could later be
      made more general and reused by other consumers.

      The only KPI change is the removal of the function vm_page_splay(),
      which is now gone.  The only KBI change is the removal of the
      left/right iterators from struct vm_page.
      
      Further technical notes, broken into smaller pieces, can be retrieved
      from the svn branch:
      http://svn.freebsd.org/base/user/attilio/vmcontention/
      
      Sponsored by:	EMC / Isilon storage division
      In collaboration with:	alc, jeff
      Tested by:	flo, pho, jhb, davide
      Tested by:	ian (arm)
      Tested by:	andreast (powerpc)
  16. 09 Mar, 2013 1 commit
    • Attilio Rao · 89f6b863
      Switch the vm_object mutex to be a rwlock.  This will enable further
      optimizations in the future, where the vm_object lock will be held in
      read mode most of the time that the object's resident and cached pages
      are accessed for reading purposes.
      
      The change is mostly mechanical, but a few notes are reported:
      * The KPI changes as follow:
        - VM_OBJECT_LOCK() -> VM_OBJECT_WLOCK()
        - VM_OBJECT_TRYLOCK() -> VM_OBJECT_TRYWLOCK()
        - VM_OBJECT_UNLOCK() -> VM_OBJECT_WUNLOCK()
        - VM_OBJECT_LOCK_ASSERT(MA_OWNED) -> VM_OBJECT_ASSERT_WLOCKED()
          (in order to avoid visibility of implementation details)
        - The read-mode operations are added:
          VM_OBJECT_RLOCK(), VM_OBJECT_TRYRLOCK(), VM_OBJECT_RUNLOCK(),
          VM_OBJECT_ASSERT_RLOCKED(), VM_OBJECT_ASSERT_LOCKED()
      * To avoid namespace pollution, vm/vm_pager.h previously required its
        consumers to include sys/mutex.h directly to serve the inline
        functions that use VM_OBJECT_LOCK(); all vm/vm_pager.h consumers
        must now also include sys/rwlock.h.
      * zfs requires a quite convoluted fix to include FreeBSD rwlocks in
        its compat layer, because a name clash between the FreeBSD and
        Solaris versions must be avoided.  To this end, zfs redefines the
        vm_object locking functions directly, isolating the FreeBSD
        components in specific compat stubs.
      
      This commit heavily breaks the KPI.  Third-party ports must be
      updated accordingly (I can think off-hand of VirtualBox, for example).
      
      Sponsored by:	EMC / Isilon storage division
      Reviewed by:	jeff
      Reviewed by:	pjd (ZFS specific review)
      Discussed with:	alc
      Tested by:	pho
  17. 15 Jul, 2012 1 commit
  18. 08 Apr, 2012 1 commit
  19. 05 Dec, 2011 1 commit
    • Alan Cox · c68c3537
      Introduce vm_reserv_alloc_contig() and teach vm_page_alloc_contig() how to
      use superpage reservations.  So, for the first time, kernel virtual memory
      that is allocated by contigmalloc(), kmem_alloc_attr(), and
      kmem_alloc_contig() can be promoted to superpages.  In fact, even a series
      of small contigmalloc() allocations may collectively result in a promoted
      superpage.
      
      Eliminate some duplication of code in vm_reserv_alloc_page().
      
      Change the type of vm_reserv_reclaim_contig()'s first parameter so
      that it is consistent with the other vm_*_contig() functions.
      
      Tested by:	marius (sparc64)
  20. 30 Oct, 2011 1 commit
    • Alan Cox · 5c1f2cc4
      Eliminate vm_phys_bootstrap_alloc().  It was a failed attempt at
      eliminating duplicated code in the various pmap implementations.
      
      Micro-optimize vm_phys_free_pages().
      
      Introduce vm_phys_free_contig().  It is a fast routine for freeing an
      arbitrary number of physically contiguous pages.  In particular, it
      doesn't require the number of pages to be a power of two.
      
      Use "u_long" instead of "unsigned long".
      
      Bruce Evans (bde@) has convinced me that the "boundary" parameters
      to kmem_alloc_contig(), vm_phys_alloc_contig(), and
      vm_reserv_reclaim_contig() should be of type "vm_paddr_t" and not
      "u_long".  Make this change.
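The "doesn't require a power of two" property can be illustrated with a small sketch: a buddy-style physical allocator keeps free blocks in power-of-two sizes, so freeing an arbitrary contiguous run means carving it into maximal aligned power-of-two chunks. Invented helper names; the real vm_phys_free_contig() hands each chunk to the buddy free lists instead of recording it:

```c
#include <assert.h>

/* Largest power of two that does not exceed x (x > 0 assumed). */
static unsigned long
pow2_floor(unsigned long x)
{
	unsigned long p = 1;

	while (p <= x / 2)
		p *= 2;
	return (p);
}

/*
 * Hypothetical sketch: split the run [start, start + npages) into maximal
 * aligned power-of-two chunks, storing the chunk sizes in order and
 * returning their count.  Each chunk's size is limited both by the current
 * start address's alignment and by the pages remaining.
 */
static int
split_contig(unsigned long start, unsigned long npages, unsigned long sizes[])
{
	unsigned long align, size;
	int n = 0;

	while (npages > 0) {
		/* start & -start isolates the lowest set bit (alignment). */
		align = (start == 0) ? npages : (start & (~start + 1UL));
		size = pow2_floor(npages);
		if (align < size)
			size = align;
		sizes[n++] = size;	/* real code: free this chunk */
		start += size;
		npages -= size;
	}
	return (n);
}
```

For example, freeing 13 pages starting at page 3 yields chunks of 1, 4, and 8 pages, none of which a power-of-two-only free routine could have handled as a single run.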
  21. 27 Jan, 2011 1 commit
  22. 19 Nov, 2010 1 commit
  23. 10 Nov, 2010 1 commit
  24. 30 Oct, 2010 1 commit
  25. 19 Oct, 2010 1 commit
  26. 16 Sep, 2010 1 commit
    • Matthew D Fleming · 4e657159
      Re-add r212370 now that the LOR in powerpc64 has been resolved:
      Add a drain function for struct sysctl_req, and use it for a variety
      of handlers, some of which had to do awkward things to get a large
      enough SBUF_FIXEDLEN buffer.
      
      Note that some sysctl handlers were explicitly outputting a trailing
      NUL byte.  This behaviour was preserved, though it should not be
      necessary.
      
      Reviewed by:    phk (original patch)
  27. 13 Sep, 2010 1 commit
  28. 09 Sep, 2010 1 commit
  29. 11 Apr, 2009 1 commit
    • Alan Cox · ab5378cf
      Previously, when vm_page_free_toq() was performed on a page belonging to
      a reservation, unless all of the reservation's pages were free, the
      reservation was moved to the head of the partially-populated reservations
      queue, where it would be the next reservation to be broken in case the
      free page queues were emptied.  Now, instead, I am moving it to the tail.
      Very likely this reservation is in the process of being freed in its
      entirety, so placing it at the tail of the queue makes it more likely that
      the underlying physical memory will be returned to the free page queues as
      one contiguous chunk.  If a reservation must be broken, it will, instead,
      be the longest unchanged reservation, which is arguably the reservation
      that is least likely to ever achieve promotion or be freed in its entirety.
      
      MFC after:	6 weeks
  30. 19 Oct, 2008 1 commit
  31. 06 Apr, 2008 1 commit
    • Alan Cox · 44aab2c3
      Introduce vm_reserv_reclaim_contig().  This function is used by
      contigmalloc(9) as a last resort to steal pages from an inactive,
      partially-used superpage reservation.
      
      Rename vm_reserv_reclaim() to vm_reserv_reclaim_inactive() and
      refactor it so that a separate subroutine is responsible for breaking
      the selected reservation.  This subroutine is also used by
      vm_reserv_reclaim_contig().
  32. 29 Dec, 2007 1 commit
    • Alan Cox · f8a47341
      Add the superpage reservation system.  This is "part 2 of 2" of the
      machine-independent support for superpages.  (The earlier part was
      the rewrite of the physical memory allocator.)  The remainder of the
      code required for superpages support is machine-dependent and will
      be added to the various pmap implementations at a later date.
      
      Initially, I am only supporting one large page size per architecture.
      Moreover, I am only enabling the reservation system on amd64.  (In
      an emergency, it can be disabled by setting VM_NRESERVLEVELS to 0
      in amd64/include/vmparam.h or your kernel configuration file.)