// Copyright (c) 1998-2009 Nokia Corporation and/or its subsidiary(-ies).
// All rights reserved.
// This component and the accompanying materials are made available
// under the terms of the License "Eclipse Public License v1.0"
// which accompanies this distribution, and is available
// at the URL "http://www.eclipse.org/legal/epl-v10.html".
// Initial Contributors:
// Nokia Corporation - initial contribution.

#define _USE_OLDEST_LISTS

#include <kern_priv.h>
A page information structure giving the current use and state of a
RAM page being managed by the kernel.

Any modification to the contents of any SPageInfo structure requires the
#MmuLock to be held. The exception to this is when a page is unused (#Type()==#EUnused);
in that case only the #RamAllocLock is required to use #SetAllocated(), #SetUncached()
and #CacheInvalidateCounter().

These structures are stored in an array at the virtual address #KPageInfoLinearBase
which is indexed by the physical address of the page they are associated with, divided
by #KPageSize. The memory for this array is allocated by the bootstrap, and it has
unallocated regions where no memory is required to store SPageInfo structures.
These unallocated memory regions are indicated by zeros in the bitmap stored at
#KPageInfoMap.
Enumeration for the usage of a RAM page. This is stored in #iType.

No physical RAM exists for this page.

This represents memory which doesn't exist or is not part of the physical
address range being managed by the kernel.

RAM fixed at boot time.

This is for memory which was allocated by the bootstrap and which
the kernel does not actively manage.
The page is either free memory in Mmu::iRamPageAllocator or the demand
paging 'live' list.

To change from or to this type the #RamAllocLock must be held.

Page is in an indeterminate state.

A page is placed into this state by Mmu::PagesAllocated when it is
allocated (ceases to be #EUnused). Once the page is put to its final
use, its type is updated accordingly.

Page was allocated with Mmu::AllocPhysicalRam, Mmu::ClaimPhysicalRam
or is part of a reserved RAM bank set at system boot.
Page is owned by a memory object.

#iOwner will point to the owning memory object and #iIndex will
be the page index into its memory for this page.

Page is being used as a shadow page.
119 The least significant bits of these flags are used for the #TMemoryAttributes
124 // lower bits hold TMemoryAttribute value for this page
127 Flag set to indicate that the page has writable mappings.
128 (This is to facilitate demand paged memory.)
130 EWritable = 1<<(EMemoryAttributeShift),
133 Flag set to indicate that the memory page contents may be different
134 to those previously saved to backing store (contents are 'dirty').
135 This is set whenever a page gains a writeable mapping and only every
136 cleared once a demand paging memory manager 'cleans' the page.
138 EDirty = 1<<(EMemoryAttributeShift+1)
State for the page when being used to contain demand paged content.

Page is not being managed for demand paging purposes, or has been transiently
removed from the demand paging live list.

Page is in the live list as a young page.

Page is in the live list as an old page.

Page was pinned, but it has been moved and not yet freed.

EPagedPinnedMoved = 0x3,

Page has been removed from the live list to prevent contents being paged-out.

// NOTE - This must be the same value as EStatePagedLocked as defined in mmubase.h

#ifdef _USE_OLDEST_LISTS

Page is in the live list as one of the oldest pages that is clean.

EPagedOldestClean = 0x5,

Page is in the live list as one of the oldest pages that is dirty.

EPagedOldestDirty = 0x6
Additional flags, stored in #iFlags2.

When #iPagedState==#EPagedPinned this indicates the page is a 'reserved' page
and does not increase the free page count when returned to the live list.

EPinnedReserve = 1<<0,
Value from enum #TType, returned by #Type().

Bitmask of values from #TFlags, returned by #Flags().

Value from enum #TPagedState, returned by #PagedState().

Bitmask of values from #TFlags2.
The memory object which owns this page.
This is always set for #EManaged pages and can be set for #EPhysAlloc pages.

DMemoryObject* iOwner;

A pointer to the SPageInfo of the page that is being shadowed.
For use with #EShadow pages only.

SPageInfo* iOriginalPageInfo;

The index for this page within the owning object's (#iOwner) memory.

Pointer identifying the current modifier of the page. See #SetModifier.

Storage location for data specific to the memory manager object handling this page.
See #SetPagingManagerData.

TUint32 iPagingManagerData;
Union of values which vary depending on the current value of #iType.

When #iType==#EPhysAlloc, this stores a count of the number of memory objects
this page has been added to.

When #iType==#EUnused, this stores the value of Mmu::iCacheInvalidateCounter
at the time the page was freed. This is used for some cache maintenance optimisations.

TUint32 iCacheInvalidateCounter;

When #iType==#EManaged, this holds the count of the number of times the page was pinned.
This will only be non-zero for demand paged memory.

Used for placing the page into linked lists, e.g. the various demand paging live lists.
Return the SPageInfo for a given page of physical RAM.

static SPageInfo* FromPhysAddr(TPhysAddr aAddress);

Return the SPageInfo for a given page of physical RAM.
If the address has no SPageInfo associated with it, a null pointer is returned.

static SPageInfo* SafeFromPhysAddr(TPhysAddr aAddress);
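/*
Illustrative sketch (not from the original header): SafeFromPhysAddr can be used
to check whether an arbitrary physical address belongs to RAM managed by the
kernel before touching its page information. The surrounding logic is hypothetical.

    SPageInfo* pi = SPageInfo::SafeFromPhysAddr(physAddr);
    if(!pi)
        return KErrNotFound; // not kernel-managed RAM, no SPageInfo exists
    // safe to examine pi (under the MmuLock)...
*/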
Return the physical address of the RAM page with which this SPageInfo object is associated.

FORCE_INLINE TPhysAddr PhysAddr();

Return a SPageInfo by conversion from the address of its embedded link member #iLink.

FORCE_INLINE static SPageInfo* FromLink(SDblQueLink* aLink)
    return (SPageInfo*)((TInt)aLink-_FOFF(SPageInfo,iLink));
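/*
FromLink recovers the enclosing SPageInfo from the address of its #iLink member,
the usual 'container of' idiom used when walking the demand paging live lists.
A minimal sketch (the list head variable is hypothetical):

    SDblQueLink* link = theLiveList.First();
    SPageInfo* pageInfo = SPageInfo::FromLink(link);
    // pageInfo now refers to the page whose iLink is queued on the list
*/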
Return the current #TType value stored in #iType.

FORCE_INLINE TType Type()

Return the current value of #iFlags.
@pre #MmuLock held (if \a aNoCheck false).

FORCE_INLINE TUint Flags(TBool aNoCheck=false)
    CheckAccess("Flags");

Return the current value of #iPagedState.

FORCE_INLINE TPagedState PagedState()
    CheckAccess("PagedState");
    return (TPagedState)iPagedState;

Return the current value of #iOwner.

FORCE_INLINE DMemoryObject* Owner()
    CheckAccess("Owner");

Return the current value of #iIndex.
@pre #MmuLock held (if \a aNoCheck false).

FORCE_INLINE TUint32 Index(TBool aNoCheck=false)
    CheckAccess("Index");

Return the current value of #iModifier.
@pre #MmuLock held (if \a aNoCheck false).

FORCE_INLINE TAny* Modifier()
    CheckAccess("Modifier");
Set this page as type #EFixed.
This is only used during boot by Mmu::Init2Common.

inline void SetFixed(TUint32 aIndex=0)
    CheckAccess("SetFixed");
    Set(EFixed,0,aIndex);

Set this page as type #EUnused.

@pre #RamAllocLock held if previous page type != #EUnknown.

@post #iModifier==0 to indicate that page usage has changed.

inline void SetUnused()
    CheckAccess("SetUnused",ECheckNotUnused|((iType!=EUnknown)?(TInt)ECheckRamAllocLock:0));
    // do not modify iFlags or iIndex in this function because page allocating cache cleaning operations rely on using this value
Set this page as type #EUnknown.
This is only used by Mmu::PagesAllocated.

@pre #RamAllocLock held.

@post #iModifier==0 to indicate that page usage has changed.

inline void SetAllocated()
    CheckAccess("SetAllocated",ECheckUnused|ECheckRamAllocLock|ENoCheckMmuLock);
    // do not modify iFlags or iIndex in this function because cache cleaning operations rely on using this value
Set this page as type #EPhysAlloc.
@param aOwner Optional value for #iOwner.
@param aIndex Optional value for #iIndex.

@post #iModifier==0 to indicate that page usage has changed.

inline void SetPhysAlloc(DMemoryObject* aOwner=0, TUint32 aIndex=0)
    CheckAccess("SetPhysAlloc");
    Set(EPhysAlloc,aOwner,aIndex);

Set this page as type #EManaged.

@param aOwner Value for #iOwner.
@param aIndex Value for #iIndex.
@param aFlags Value for #iFlags (aOwner->PageInfoFlags()).

@post #iModifier==0 to indicate that page usage has changed.

inline void SetManaged(DMemoryObject* aOwner, TUint32 aIndex, TUint8 aFlags)
    CheckAccess("SetManaged");
    Set(EManaged,aOwner,aIndex);
Set this page as type #EShadow.

This is for use by #DShadowPage.

@param aIndex Value for #iIndex.
@param aFlags Value for #iFlags.

@post #iModifier==0 to indicate that page usage has changed.

inline void SetShadow(TUint32 aIndex, TUint8 aFlags)
    CheckAccess("SetShadow");
    Set(EShadow,0,aIndex);

Store a pointer to the SPageInfo of the page that this page is shadowing.

@param aOrigPageInfo Pointer to the SPageInfo that this page is shadowing.

inline void SetOriginalPage(SPageInfo* aOrigPageInfo)
    CheckAccess("SetOriginalPage");
    __NK_ASSERT_DEBUG(iType == EShadow);
    __NK_ASSERT_DEBUG(!iOriginalPageInfo);
    iOriginalPageInfo = aOrigPageInfo;
Returns a pointer to the SPageInfo of the page that this page is shadowing.

@return A pointer to the SPageInfo that this page is shadowing.

inline SPageInfo* GetOriginalPage()
    CheckAccess("GetOriginalPage");
    __NK_ASSERT_DEBUG(iType == EShadow);
    __NK_ASSERT_DEBUG(iOriginalPageInfo);
    return iOriginalPageInfo;

/** Internal implementation factor for methods which set page type. */
FORCE_INLINE void Set(TType aType, DMemoryObject* aOwner, TUint32 aIndex)
    CheckAccess("Set",ECheckNotAllocated|ECheckNotPaged);
    (TUint32&)iType = aType; // also clears iFlags, iFlags2 and iPagedState
Set #iFlags to indicate that the contents of this page have been removed from
any caches.

@pre #MmuLock held if #iType!=#EUnused, #RamAllocLock held if #iType==#EUnused.

FORCE_INLINE void SetUncached()
    CheckAccess("SetUncached",iType==EUnused ? ECheckRamAllocLock|ENoCheckMmuLock : 0);
    __NK_ASSERT_DEBUG(iType==EUnused || (iType==EPhysAlloc && iUseCount==0));
    iFlags = EMemAttNormalUncached;
Set memory attributes and colour for a page of type #EPhysAlloc.

This is set the first time a page of type #EPhysAlloc is added to a memory
object with DMemoryManager::AddPages or DMemoryManager::AddContiguous.
The set values are used to check constraints are met if the page is
also added to other memory objects.

@param aIndex The page index within a memory object at which this page
    has been added. This is stored in #iIndex and used to determine
    the page's colour.
@param aFlags Value for #iFlags. This sets the memory attributes for the page.

@post #iModifier==0 to indicate that page usage has changed.

inline void SetMapped(TUint32 aIndex, TUint aFlags)
    CheckAccess("SetMapped");
    __NK_ASSERT_DEBUG(iType==EPhysAlloc);
    __NK_ASSERT_DEBUG(iUseCount==0); // check page not already added to an object
@post #iModifier==0 to indicate that page state has changed.

FORCE_INLINE void SetPagedState(TPagedState aPagedState)
    CheckAccess("SetPagedState");
    __NK_ASSERT_DEBUG(aPagedState==iPagedState || iPagedState!=EPagedPinned || iPinCount==0); // make sure we don't set an unpinned state if iPinCount!=0
    iPagedState = aPagedState;
Set the page's #iModifier value.

#iModifier is cleared to zero whenever the usage or paging state of the page
changes. So if a thread sets this to a suitable unique value (e.g. the address
of a local variable) then it may perform a long running operation on the page
and later check with #CheckModified that no other thread has changed the page
state or used SetModifier in the intervening time.

Example:

    TInt anyLocalVariable; // arbitrary local variable

    SPageInfo* thePageInfo = GetAPage();
    thePageInfo->SetModifier(&anyLocalVariable); // use &anyLocalVariable as value unique to this thread

    DoOperation(thePageInfo);

    if(!thePageInfo->CheckModified(&anyLocalVariable))
        {
        // nobody else touched the page...
        OperationSucceeded(thePageInfo);
        }
    else
        {
        // somebody else changed our page...
        OperationInterrupted(thePageInfo);
        }

FORCE_INLINE void SetModifier(TAny* aModifier)
    CheckAccess("SetModifier");
    iModifier = aModifier;
Return true if the #iModifier value does not match a specified value.

@param aModifier A 'modifier' value previously set with #SetModifier.

FORCE_INLINE TBool CheckModified(TAny* aModifier)
    CheckAccess("CheckModified");
    return iModifier!=aModifier;
Flag this page as having Page Table Entries which give writeable access permissions.
This sets flags #EWritable and #EDirty.

FORCE_INLINE void SetWritable()
    CheckAccess("SetWritable");
    // This should only be invoked on paged pages.
    __NK_ASSERT_DEBUG(PagedState() != EUnpaged);

Flag this page as no longer having any Page Table Entries which give writeable
access permissions.
This clears the flag #EWritable.

FORCE_INLINE void SetReadOnly()
    CheckAccess("SetReadOnly");
    iFlags &= ~EWritable;

Returns true if #SetWritable has been called without a subsequent #SetReadOnly.
This returns the flag #EWritable.

FORCE_INLINE TBool IsWritable()
    CheckAccess("IsWritable");
    return iFlags&EWritable;
Flag this page as 'dirty', indicating that its contents may no longer match those saved
to a backing store. This sets the flag #EDirty.

This is used in the management of demand paged memory.

FORCE_INLINE void SetDirty()
    CheckAccess("SetDirty");

Flag this page as 'clean', indicating that its contents now match those saved
to a backing store. This clears the flag #EDirty.

This is used in the management of demand paged memory.

FORCE_INLINE void SetClean()
    CheckAccess("SetClean");

Return the #EDirty flag. See #SetDirty and #SetClean.

This is used in the management of demand paged memory.

FORCE_INLINE TBool IsDirty()
    CheckAccess("IsDirty");
    return iFlags&EDirty;
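/*
Illustrative sketch of how a demand paging manager might use these flags when
cleaning a page; this is an assumed usage, not code from the original source.
Write access is removed first so no mapping can re-dirty the page while its
contents are written out (WriteToBackingStore is a hypothetical helper).

    MmuLock::Lock();
    pageInfo->SetReadOnly();       // clear EWritable once writable PTEs have been restricted
    MmuLock::Unlock();
    WriteToBackingStore(pageInfo); // hypothetical: save contents to backing store
    MmuLock::Lock();
    pageInfo->SetClean();          // clear EDirty; contents now match backing store
    MmuLock::Unlock();
*/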
Set #iCacheInvalidateCounter to the specified value.

@pre #iType==#EUnused.

void SetCacheInvalidateCounter(TUint32 aCacheInvalidateCounter)
    CheckAccess("SetCacheInvalidateCounter");
    __NK_ASSERT_DEBUG(iType==EUnused);
    iCacheInvalidateCounter = aCacheInvalidateCounter;

Return #iCacheInvalidateCounter.

@pre #iType==#EUnused.

TUint32 CacheInvalidateCounter()
    CheckAccess("CacheInvalidateCounter",ECheckRamAllocLock|ENoCheckMmuLock);
    __NK_ASSERT_DEBUG(iType==EUnused);
    return iCacheInvalidateCounter;
Increment #iUseCount to indicate that the page has been added to a memory object.

@return New value of #iUseCount.

@pre #iType==#EPhysAlloc.

TUint32 IncUseCount()
    CheckAccess("IncUseCount");
    __NK_ASSERT_DEBUG(iType==EPhysAlloc);

Decrement #iUseCount to indicate that the page has been removed from a memory object.

@return New value of #iUseCount.

@pre #iType==#EPhysAlloc.

TUint32 DecUseCount()
    CheckAccess("DecUseCount");
    __NK_ASSERT_DEBUG(iType==EPhysAlloc);
    __NK_ASSERT_DEBUG(iUseCount);

Return #iUseCount; this indicates the number of times the page has been added to memory object(s).

@pre #iType==#EPhysAlloc.

    CheckAccess("UseCount");
    __NK_ASSERT_DEBUG(iType==EPhysAlloc);
Increment #iPinCount to indicate that a mapping has pinned this page.
This is only done for demand paged memory; unpaged memory does not have
#iPinCount updated when it is pinned.

@return New value of #iPinCount.

@pre #iType==#EManaged.

TUint32 IncPinCount()
    CheckAccess("IncPinCount");
    __NK_ASSERT_DEBUG(iType==EManaged);

Decrement #iPinCount to indicate that a mapping which was pinning this page has been removed.
This is only done for demand paged memory; unpaged memory does not have
#iPinCount updated when it is unpinned.

@return New value of #iPinCount.

@pre #iType==#EManaged.

TUint32 DecPinCount()
    CheckAccess("DecPinCount");
    __NK_ASSERT_DEBUG(iType==EManaged);
    __NK_ASSERT_DEBUG(iPinCount);

Clear #iPinCount to zero as this page is no longer pinned.
This is only done for demand paged memory; unpaged memory does not have
#iPinCount updated.

@pre #iType==#EManaged.

    CheckAccess("ClearPinCount");
    __NK_ASSERT_DEBUG(iType==EManaged);
    __NK_ASSERT_DEBUG(iPinCount);

Return #iPinCount, which indicates the number of mappings that have pinned this page.
This is only valid for demand paged memory; unpaged memory does not have
#iPinCount updated when it is pinned.

@pre #iType==#EManaged.

    CheckAccess("PinCount");
    __NK_ASSERT_DEBUG(iType==EManaged);
Set the #EPinnedReserve flag.

void SetPinnedReserve()
    CheckAccess("SetPinnedReserve");
    iFlags2 |= EPinnedReserve;

Clear the #EPinnedReserve flag.

TBool ClearPinnedReserve()
    CheckAccess("ClearPinnedReserve");
    TUint oldFlags2 = iFlags2;
    iFlags2 = oldFlags2&~EPinnedReserve;
    return oldFlags2&EPinnedReserve;
Set #iPagingManagerData to the specified value.

@pre #iType==#EManaged.

void SetPagingManagerData(TUint32 aPagingManagerData)
    CheckAccess("SetPagingManagerData");
    __NK_ASSERT_DEBUG(iType==EManaged);
    iPagingManagerData = aPagingManagerData;

Return #iPagingManagerData.

@pre #iType==#EManaged.

TUint32 PagingManagerData()
    CheckAccess("PagingManagerData");
    __NK_ASSERT_DEBUG(iType==EManaged);
    return iPagingManagerData;
ECheckNotAllocated = 1<<0,
ECheckNotUnused = 1<<1,
ECheckNotPaged = 1<<3,
ECheckRamAllocLock = 1<<4,
ENoCheckMmuLock = 1<<5

void CheckAccess(const char* aMessage, TUint aFlags=0);

FORCE_INLINE void CheckAccess(const char* /*aMessage*/, TUint /*aFlags*/=0)

Debug function which outputs the contents of this object to the kernel debug port.

FORCE_INLINE void Dump()
const TInt KPageInfosPerPageShift = KPageShift-KPageInfoShift;
const TInt KPageInfosPerPage = 1<<KPageInfosPerPageShift;
const TInt KNumPageInfoPagesShift = 32-KPageShift-KPageInfosPerPageShift;
const TInt KNumPageInfoPages = 1<<KNumPageInfoPagesShift;

FORCE_INLINE SPageInfo* SPageInfo::FromPhysAddr(TPhysAddr aAddress)
    return ((SPageInfo*)KPageInfoLinearBase)+(aAddress>>KPageShift);

FORCE_INLINE TPhysAddr SPageInfo::PhysAddr()
    return ((TPhysAddr)this)<<KPageInfosPerPageShift;
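/*
FromPhysAddr and PhysAddr are inverses: the array index is the physical page
number, and since sizeof(SPageInfo)==(1<<KPageInfoShift) the shifts cancel.
PhysAddr relies on KPageInfoLinearBase being chosen so that its bits are shifted
out of the 32-bit result. A worked sketch, assuming KPageShift==12 (4KB pages):

    TPhysAddr addr = 0x80042000;                   // physical page number 0x80042
    SPageInfo* pi = SPageInfo::FromPhysAddr(addr); // KPageInfoLinearBase + 0x80042*sizeof(SPageInfo)
    __NK_ASSERT_DEBUG(pi->PhysAddr()==addr);       // shifting left recovers the page address
*/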
A page table information structure giving the current use and state for a
page table.

struct SPageTableInfo

Enumeration for the usage of a page table. This is stored in #iType.

Page table is unused (implementation assumes this enumeration == 0).
@see #iUnused and #SPageTableInfo::TUnused.

Page table has undetermined use.
(Either created by the bootstrap or is newly allocated but not yet assigned.)

Page table is being used by a coarse memory object.
@see #iCoarse and #SPageTableInfo::TCoarse.

Page table is being used for fine mappings.
@see #iFine and #SPageTableInfo::TFine.
Flags stored in #iFlags.

Page table is for mapping demand paged content.

EDemandPaged = 1<<0,

Page table is in the Page Table Allocator's cleanup list
(only set for the first page table in a RAM page).

EOnCleanupList = 1<<1,

The page table cluster that this page table info refers to is currently allocated.

EPtClusterAllocated = 1<<2
Value from enum #TType.

Bitmask of values from #TFlags.

Spare member used for padding.

Number of pages currently mapped by this page table.
Normally, when #iPageCount==0 and #iPermanenceCount==0, the page table is freed.

Count for the number of uses of this page table which require it to be permanently allocated,
even when it maps no pages (#iPageCount==0).

TUint16 iPermanenceCount;
Information about a page table when #iType==#EUnused.

Cast this object to a SDblQueLink reference.
This is used for placing unused SPageTableInfo objects into free lists.

FORCE_INLINE SDblQueLink& Link()
    { return *(SDblQueLink*)this; }

SDblQueLink* iNext; ///< Next free page table
SDblQueLink* iPrev; ///< Previous free page table

Information about a page table when #iType==#ECoarseMapping.

Memory object which owns this page table.

DCoarseMemory* iMemoryObject;

The index of the page table, i.e. the offset, in 'chunks',
into the object's memory that the page table is being used to map.

TUint16 iChunkIndex;

The #TPteType the page table is being used for.

Information about a page table when #iType==#EFineMapping.

Start of the virtual address region that this page table is currently
mapping memory at, ORed with the OS ASID of the address space this lies in.

TLinAddr iLinAddrAndOsAsid;

Union of type specific info.

TUnused iUnused; ///< Information about a page table when #iType==#EUnused.
TCoarse iCoarse; ///< Information about a page table when #iType==#ECoarseMapping.
TFine iFine;     ///< Information about a page table when #iType==#EFineMapping.
Return the SPageTableInfo for the page table in which a given PTE lies.

static SPageTableInfo* FromPtPtr(TPte* aPtPte);

Return the page table with which this SPageTableInfo is associated.

Used at boot time to initialise page tables which were allocated by the bootstrap.

@param aCount The number of pages being mapped by this page table.

FORCE_INLINE void Boot(TUint aCount)
    iPageCount = aCount;
    iPermanenceCount = 1; // assume page table shouldn't be freed
    iFlags = EPtClusterAllocated;
Initialise a page table after it has had memory allocated for it.

@param aDemandPaged True if this page table has been allocated for use with
    demand paged memory.

FORCE_INLINE void New(TBool aDemandPaged)
    iFlags = EPtClusterAllocated | (aDemandPaged ? EDemandPaged : 0);

Return true if the page table cluster that this page table info refers to has
been previously allocated.

FORCE_INLINE TBool IsPtClusterAllocated()
    return iFlags & EPtClusterAllocated;

The page table cluster that this page table info refers to has been freed.

FORCE_INLINE void PtClusterFreed()
    __NK_ASSERT_DEBUG(IsPtClusterAllocated());
    iFlags &= ~EPtClusterAllocated;

The page table cluster that this page table info refers to has been allocated.

FORCE_INLINE void PtClusterAlloc()
    __NK_ASSERT_DEBUG(!IsPtClusterAllocated());
    iFlags |= EPtClusterAllocated;
Initialise a page table to type #EUnknown after it has been newly allocated.

@pre #PageTablesLockIsHeld.

FORCE_INLINE void Init()
    __NK_ASSERT_DEBUG(IsPtClusterAllocated());
    iPermanenceCount = 0;
Increment #iPageCount to account for newly mapped pages.

@param aStep Amount to add to #iPageCount. Default is one.

@return New value of #iPageCount.

FORCE_INLINE TUint IncPageCount(TUint aStep=1)
    CheckAccess("IncPageCount");
    TUint count = iPageCount; // compiler handles half-word values stupidly, so give it a hand

Decrement #iPageCount to account for removed pages.

@param aStep Amount to subtract from #iPageCount. Default is one.

@return New value of #iPageCount.

FORCE_INLINE TUint DecPageCount(TUint aStep=1)
    CheckAccess("DecPageCount");
    TUint count = iPageCount; // compiler handles half-word values stupidly, so give it a hand

FORCE_INLINE TUint PageCount()
    CheckAccess("PageCount");
Increment #iPermanenceCount to indicate a new use of this page table which
requires it to be permanently allocated.

@return New value of #iPermanenceCount.

FORCE_INLINE TUint IncPermanenceCount()
    CheckAccess("IncPermanenceCount");
    TUint count = iPermanenceCount; // compiler handles half-word values stupidly, so give it a hand
    iPermanenceCount = count;

Decrement #iPermanenceCount to indicate the removal of a use added by #IncPermanenceCount.

@return New value of #iPermanenceCount.

FORCE_INLINE TUint DecPermanenceCount()
    CheckAccess("DecPermanenceCount");
    TUint count = iPermanenceCount; // compiler handles half-word values stupidly, so give it a hand
    __NK_ASSERT_DEBUG(count);
    iPermanenceCount = count;

Return #iPermanenceCount.

FORCE_INLINE TUint PermanenceCount()
    CheckAccess("PermanenceCount");
    return iPermanenceCount;
Set page table to the #EUnused state.
This is only intended for use by #PageTableAllocator.

@pre #MmuLock held and #PageTablesLockIsHeld.

FORCE_INLINE void SetUnused()
    CheckChangeUse("SetUnused");

Return true if the page table is in the #EUnused state.
This is only intended for use by #PageTableAllocator.

@pre #MmuLock held or #PageTablesLockIsHeld.

FORCE_INLINE TBool IsUnused()
    CheckCheckUse("IsUnused");
    return iType==EUnused;
Set page table as being used by a coarse memory object.

@param aMemory     Memory object which owns this page table.
@param aChunkIndex The index of the page table, i.e. the offset, in 'chunks',
    into the object's memory that the page table is being used to map.
@param aPteType    The #TPteType the page table is being used for.

@pre #MmuLock held and #PageTablesLockIsHeld.

inline void SetCoarse(DCoarseMemory* aMemory, TUint aChunkIndex, TUint aPteType)
    CheckChangeUse("SetCoarse");
    iPermanenceCount = 0;
    iType = ECoarseMapping;
    iCoarse.iMemoryObject = aMemory;
    iCoarse.iChunkIndex = aChunkIndex;
    iCoarse.iPteType = aPteType;

Return true if this page table is currently being used by a coarse memory object
matching the specified arguments.
For arguments, see #SetCoarse.

@pre #MmuLock held or #PageTablesLockIsHeld.

inline TBool CheckCoarse(DCoarseMemory* aMemory, TUint aChunkIndex, TUint aPteType)
    CheckCheckUse("CheckCoarse");
    return iType==ECoarseMapping
        && iCoarse.iMemoryObject==aMemory
        && iCoarse.iChunkIndex==aChunkIndex
        && iCoarse.iPteType==aPteType;
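/*
Illustrative sketch (assumed usage, not from the original source): when a coarse
memory object needs a page table for a given chunk, an existing table can be
recognised by matching its recorded use before allocating a new one:

    SPageTableInfo* pti = SPageTableInfo::FromPtPtr(pt);
    if(pti->CheckCoarse(memory,chunkIndex,pteType))
        return pt; // this page table already serves the chunk
    // otherwise allocate a new page table and initialise it with SetCoarse()...
*/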
Set page table as being used for fine mappings.

@param aLinAddr Start of the virtual address region that the page table is
    mapping memory at.
@param aOsAsid  The OS ASID of the address space which \a aLinAddr lies in.

@pre #MmuLock held and #PageTablesLockIsHeld.

inline void SetFine(TLinAddr aLinAddr, TUint aOsAsid)
    CheckChangeUse("SetFine");
    __NK_ASSERT_DEBUG((aLinAddr&KPageMask)==0);
    iPermanenceCount = 0;
    iType = EFineMapping;
    iFine.iLinAddrAndOsAsid = aLinAddr|aOsAsid;

Return true if this page table is currently being used for fine mappings
matching the specified arguments.
For arguments, see #SetFine.

@pre #MmuLock held or #PageTablesLockIsHeld.

inline TBool CheckFine(TLinAddr aLinAddr, TUint aOsAsid)
    CheckCheckUse("CheckFine");
    __NK_ASSERT_DEBUG((aLinAddr&KPageMask)==0);
    return iType==EFineMapping
        && iFine.iLinAddrAndOsAsid==(aLinAddr|aOsAsid);
Set a previously unknown page table as now being used for fine mappings.
This is used during the boot process by DFineMemory::ClaimInitialPages
to initialise the state of a page table allocated by the bootstrap.

@param aLinAddr Start of the virtual address region that the page table is
    mapping memory at.
@param aOsAsid  The OS ASID of the address space which \a aLinAddr lies in.
    (This should be KKernelOsAsid.)

@pre #MmuLock held and #PageTablesLockIsHeld.

inline TBool ClaimFine(TLinAddr aLinAddr, TUint aOsAsid)
    CheckChangeUse("ClaimFine");
    __NK_ASSERT_DEBUG((aLinAddr&KPageMask)==0);
    if(iType==EFineMapping)
        return CheckFine(aLinAddr,aOsAsid);
    iType = EFineMapping;
    iFine.iLinAddrAndOsAsid = aLinAddr|aOsAsid;
Return true if page table was allocated for use with demand paged memory.

FORCE_INLINE TBool IsDemandPaged()
    return iFlags&EDemandPaged;

Debug check returning true if the value of #iPageCount is consistent with
the PTEs in this page table.

TBool CheckPageCount();
Return a reference to an embedded SDblQueLink which is used for placing this
SPageTableInfo object into free lists.
@pre #PageTablesLockIsHeld.
@pre #iType==#EUnused.

inline SDblQueLink& FreeLink()
    __NK_ASSERT_DEBUG(IsUnused());
    return iUnused.Link();

Return a pointer to a SPageTableInfo by conversion from the address
of its embedded link as returned by #FreeLink.

FORCE_INLINE static SPageTableInfo* FromFreeLink(SDblQueLink* aLink)
    return (SPageTableInfo*)((TInt)aLink-_FOFF(SPageTableInfo,iUnused));

Return the SPageTableInfo for the first page table in the same
physical RAM page as the page table for this SPageTableInfo.

FORCE_INLINE SPageTableInfo* FirstInPage()
    return (SPageTableInfo*)(TLinAddr(this)&~(KPtClusterMask*sizeof(SPageTableInfo)));

Return the SPageTableInfo for the last page table in the same
physical RAM page as the page table for this SPageTableInfo.

FORCE_INLINE SPageTableInfo* LastInPage()
    return (SPageTableInfo*)(TLinAddr(this)|(KPtClusterMask*sizeof(SPageTableInfo)));

Return true if the page table for this SPageTableInfo is
the first page table in the physical page it occupies.

FORCE_INLINE TBool IsFirstInPage()
    return (TLinAddr(this)&(KPtClusterMask*sizeof(SPageTableInfo)))==0;
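/*
Sketch of the cluster arithmetic used above, assuming KPtClusterMask==KPtClusterSize-1
where KPtClusterSize is the number of page tables per RAM page. The SPageTableInfo
objects for one RAM page's worth of page tables are contiguous, so masking the low
bits of 'this' finds the first of them:

    SPageTableInfo* first = pti->FirstInPage();
    SPageTableInfo* last  = pti->LastInPage();
    __NK_ASSERT_DEBUG(last-first==(TInt)KPtClusterMask);
    __NK_ASSERT_DEBUG(first->IsFirstInPage());

This only works because KPtClusterSize and sizeof(SPageTableInfo) are both powers
of two, making KPtClusterMask*sizeof(SPageTableInfo) a contiguous bit mask.
*/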
Return true if this page table has been added to the cleanup list with
#AddToCleanupList.
Must only be used for page tables which return true for #IsFirstInPage.

@pre #PageTablesLockIsHeld.

FORCE_INLINE TBool IsOnCleanupList()
    __NK_ASSERT_DEBUG(IsFirstInPage());
    return iFlags&EOnCleanupList;

Add the RAM page containing this page table to the specified cleanup list.
Must only be used for page tables which return true for #IsFirstInPage.

@pre #PageTablesLockIsHeld.

FORCE_INLINE void AddToCleanupList(SDblQue& aCleanupList)
    __NK_ASSERT_DEBUG(IsUnused());
    __NK_ASSERT_DEBUG(IsFirstInPage());
    __NK_ASSERT_DEBUG(!IsOnCleanupList());
    aCleanupList.Add(&FreeLink());
    iFlags |= EOnCleanupList;

Remove the RAM page containing this page table from the cleanup list it
was added to with #AddToCleanupList.
Must only be used for page tables which return true for #IsFirstInPage.

@pre #PageTablesLockIsHeld.

FORCE_INLINE void RemoveFromCleanupList()
    __NK_ASSERT_DEBUG(IsUnused());
    __NK_ASSERT_DEBUG(IsFirstInPage());
    __NK_ASSERT_DEBUG(IsOnCleanupList());
    iFlags &= ~EOnCleanupList;
Remove this page table from its owner and free it.
This is only used with page tables which map demand paged memory
and is intended for use in implementing #DPageTableMemoryManager.

@return KErrNone if successful,
    otherwise one of the system wide error codes.

@pre #MmuLock held and #PageTablesLockIsHeld.

void CheckChangeUse(const char* aName);
void CheckCheckUse(const char* aName);
void CheckAccess(const char* aName);
void CheckInit(const char* aName);

FORCE_INLINE void CheckChangeUse(const char* /*aName*/)
FORCE_INLINE void CheckCheckUse(const char* /*aName*/)
FORCE_INLINE void CheckAccess(const char* /*aName*/)
FORCE_INLINE void CheckInit(const char* /*aName*/)
const TInt KPageTableInfoShift = 4;
__ASSERT_COMPILE(sizeof(SPageTableInfo)==(1<<KPageTableInfoShift));

FORCE_INLINE SPageTableInfo* SPageTableInfo::FromPtPtr(TPte* aPtPte)
    TUint id = ((TLinAddr)aPtPte-KPageTableBase)>>KPageTableShift;
    return (SPageTableInfo*)KPageTableInfoBase+id;

FORCE_INLINE TPte* SPageTableInfo::PageTable()
    return (TPte*)
        (KPageTableBase+
            (
            ((TLinAddr)this-(TLinAddr)KPageTableInfoBase)
            <<(KPageTableShift-KPageTableInfoShift)
            )
        );
Class providing access to the mutex used to protect memory allocation operations;
this is the mutex Mmu::iRamAllocatorMutex.
In addition to providing locking, these functions monitor the system's free RAM
levels and call K::CheckFreeMemoryLevel to notify the system of changes.

The lock may be acquired multiple times by a thread, and will remain locked
until #Unlock has been used enough times to balance this.

@pre The current thread has previously acquired the lock.

static void Unlock();

Allow another thread to acquire the lock.
This is equivalent to #Unlock followed by #Lock, but optimised
to only do this if there is another thread waiting on the lock.

@return True if the lock was released by this function.

@pre The current thread has previously acquired the lock.

static TBool Flash();

Return true if the current thread holds the lock.
This is used for debug checks.

static TBool IsHeld();

Return true if the PageTableLock is held by the current thread.
This lock is the mutex used to protect page table allocation; it is acquired
with

    ::PageTables.Lock();

and released with

    ::PageTables.Unlock();

TBool PageTablesLockIsHeld();
Class providing access to the fast mutex used to protect various
low level memory operations.

This lock must only be held for a very short and bounded time.

@pre The current thread has previously acquired the lock.

static void Unlock();

Allow another thread to acquire the lock.
This is equivalent to #Unlock followed by #Lock, but optimised
to only do this if there is another thread waiting on the lock.

@return True if the lock was released by this function.

@pre The current thread has previously acquired the lock.

static TBool Flash();

Return true if the current thread holds the lock.
This is used for debug checks.

static TBool IsHeld();

Increment a counter and perform the action of #Flash() once a given threshold
value is reached. After flashing, the counter is reset.

This is typically used in long running loops to periodically flash the lock
and so avoid holding it for too long, e.g.

    TUint flash = 0;
    const TUint KMaxIterationsWithLock = 10;
    MmuLock::Lock();
    while(WorkToDo())
        {
        DoSomeWork();
        MmuLock::Flash(flash,KMaxIterationsWithLock); // flash every N loops
        }
    MmuLock::Unlock();

@param aCounter        Reference to the counter.
@param aFlashThreshold Value \a aCounter must reach before flashing the lock.
@param aStep           Value to add to \a aCounter.

@return True if the lock was released by this function.

@pre The current thread has previously acquired the lock.

static FORCE_INLINE TBool Flash(TUint& aCounter, TUint aFlashThreshold, TUint aStep=1)
    if((aCounter+=aStep)<aFlashThreshold)
        return EFalse; // threshold not reached, keep holding the lock
    aCounter -= aFlashThreshold;
    return MmuLock::Flash();
Begin a debug check to test that the MmuLock is not unlocked unexpectedly.

This is used in situations where a series of operations must be performed
atomically with the MmuLock held. It is usually used via the
#__UNLOCK_GUARD_START macro, e.g.

    __UNLOCK_GUARD_START(MmuLock);
    SomeCode();
    SomeMoreCode();
    __UNLOCK_GUARD_END(MmuLock); // fault if MmuLock released by SomeCode or SomeMoreCode

static FORCE_INLINE void UnlockGuardStart()

End a debug check testing that the MmuLock is not unlocked unexpectedly.
This is usually used via the #__UNLOCK_GUARD_END macro, which faults if false is returned.

@see UnlockGuardStart

@return True if the MmuLock was not released between a previous #UnlockGuardStart
    and the call to this function.

static FORCE_INLINE TBool UnlockGuardEnd()
    __NK_ASSERT_DEBUG(UnlockGuardNest);
    return UnlockGuardFail==0;

Executed whenever the lock is released to check that
#UnlockGuardStart and #UnlockGuardEnd are balanced.

static FORCE_INLINE void UnlockGuardCheck()
    UnlockGuardFail = true;

static NFastMutex iLock;

static TUint UnlockGuardNest;
static TUint UnlockGuardFail;
Interface for accessing the lock mutex being used to serialise
explicit modifications to a specified memory object.

The lock mutex is either the one which was previously assigned with
DMemoryObject::SetLock or, if none was set, a dynamically assigned
mutex from #MemoryObjectMutexPool, which will be of 'order' #KMutexOrdMemoryObject.

class MemoryObjectLock

Acquire the lock for the specified memory object.
If the object has no lock, one is assigned from #MemoryObjectMutexPool.

static void Lock(DMemoryObject* aMemory);

Release the lock for the specified memory object, which was acquired
with #Lock. If the lock was one which was dynamically assigned, and there
are no threads waiting for it, then the lock is unassigned from the memory
object.

static void Unlock(DMemoryObject* aMemory);

Return true if the current thread holds the lock for the specified memory object.
This is used for debug checks.

static TBool IsHeld(DMemoryObject* aMemory);

#define __UNLOCK_GUARD_START(_l) __DEBUG_ONLY(_l::UnlockGuardStart())
#define __UNLOCK_GUARD_END(_l) __NK_ASSERT_DEBUG(_l::UnlockGuardEnd())

const TUint KMutexOrdAddresSpace = KMutexOrdKernelHeap + 2;
const TUint KMutexOrdMemoryObject = KMutexOrdKernelHeap + 1;
const TUint KMutexOrdMmuAlloc = KMutexOrdRamAlloc + 1;
//#define FORCE_TRACE
//#define FORCE_TRACE2
//#define FORCE_TRACEB
//#define FORCE_TRACEP

#define TRACE_printf Kern::Printf

#define TRACE_ALWAYS(t) TRACE_printf t

#define TRACE(t) TRACE_printf t
#define TRACE(t) __KTRACE_OPT(KMMU2,TRACE_printf t)

#define TRACE2(t) TRACE_printf t
#define TRACE2(t) __KTRACE_OPT(KMMU2,TRACE_printf t)

#define TRACEB(t) TRACE_printf t
#define TRACEB(t) __KTRACE_OPT2(KMMU,KBOOT,TRACE_printf t)

#define TRACEP(t) TRACE_printf t
#define TRACEP(t) __KTRACE_OPT(KPAGING,TRACE_printf t)
The maximum number of consecutive updates to #SPageInfo structures which
should be executed without releasing the #MmuLock.

This value must be an integer power of two.

const TUint KMaxPageInfoUpdatesInOneGo = 64;

The maximum number of simple operations on memory page state which should
occur without releasing the #MmuLock. Examples of the operations are
read-modify-write of a Page Table Entry (PTE) or of entries in a memory
object's RPageArray.

This value must be an integer power of two.

const TUint KMaxPagesInOneGo = KMaxPageInfoUpdatesInOneGo/2;

The maximum number of Page Directory Entries which should be updated
without releasing the #MmuLock.

This value must be an integer power of two.

const TUint KMaxPdesInOneGo = KMaxPageInfoUpdatesInOneGo;
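/*
Illustrative sketch of the batching pattern these constants support, in the style
of the MmuLock::Flash example above (UpdateOnePage is a hypothetical per-page
operation): work is done in bounded bursts so the MmuLock is never held too long.

    TUint flash = 0;
    MmuLock::Lock();
    for(TUint i=0; i<aCount; ++i)
        {
        UpdateOnePage(aPages[i]); // hypothetical per-page state update
        MmuLock::Flash(flash,KMaxPageInfoUpdatesInOneGo);
        }
    MmuLock::Unlock();
*/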
/********************************************
 ********************************************/

class DRamAllocator;

Interface to RAM allocation and MMU data structure manipulation.

EInvalidRamBankAtBoot,
EInvalidReservedBankAtBoot,
EInvalidPageTableAtBoot,
EBadMappedPageAfterBoot,
ERamAllocMutexCreateFailed,
EBadFreePhysicalRam,
EUnsafePageInfoAccess,
EUnsafePageTableInfoAccess,
EPhysMemSyncMutexCreateFailed,
Attribute flags used when allocating RAM pages.

The least significant bits of these flags are used for the #TMemoryType
value for the memory.

// lower bits hold TMemoryType

If this flag is set, don't wipe the contents of the memory when allocated.
By default, for security and confidentiality reasons, the memory is filled
with a 'wipe' value to erase the previous contents.

EAllocNoWipe = 1<<(KMemoryTypeShift),

If this flag is set, any memory wiping will fill memory with the byte
value starting at bit position #EAllocWipeByteShift in these flags.

EAllocUseCustomWipeByte = 1<<(KMemoryTypeShift+1),

If this flag is set, memory allocation won't attempt to reclaim pages
from the demand paging system.
This is used to prevent deadlock when the paging system itself attempts
to allocate memory for itself.

EAllocNoPagerReclaim = 1<<(KMemoryTypeShift+2),

Bit position within these flags of the least significant bit of the
byte value used when #EAllocUseCustomWipeByte is set.

EAllocWipeByteShift = 8
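/*
Illustrative sketch of composing allocation flags (assumed usage): the TMemoryType
value occupies the low bits, and a custom wipe byte is placed at #EAllocWipeByteShift.
EMemAttNormalCached is assumed to be a suitable TMemoryType value here.

    TRamAllocFlags flags = (TRamAllocFlags)
        ( EMemAttNormalCached             // TMemoryType value in the low bits
        | EAllocUseCustomWipeByte
        | (0xAAu<<EAllocWipeByteShift) ); // wipe newly allocated pages with 0xAA
*/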
void Init2FinalCommon();

static void Panic(TPanic aPanic);

static TInt HandlePageFault(TLinAddr aPc, TLinAddr aFaultAddress, TUint aAccessPermissions, TAny* aExceptionInfo);

TUint FreeRamInPages();
TUint TotalPhysicalRamPages();

TInt AllocRam(TPhysAddr* aPages, TUint aCount, TRamAllocFlags aFlags, TZonePageType aZonePageType,
              TUint aBlockZoneId=KRamZoneInvalidId, TBool aBlockRest=EFalse);
void FreeRam(TPhysAddr* aPages, TUint aCount, TZonePageType aZonePageType);
TInt AllocContiguousRam(TPhysAddr& aPhysAddr, TUint aCount, TUint aAlign, TRamAllocFlags aFlags);
void FreeContiguousRam(TPhysAddr aPhysAddr, TUint aCount);

const SRamZone* RamZoneConfig(TRamZoneCallback& aCallback) const;
void SetRamZoneConfig(const SRamZone* aZones, TRamZoneCallback aCallback);
TInt ModifyRamZoneFlags(TUint aId, TUint aClearMask, TUint aSetMask);
TInt GetRamZonePageCount(TUint aId, SRamZonePageCount& aPageData);
TInt ZoneAllocPhysicalRam(TUint* aZoneIdList, TUint aZoneIdCount, TInt aBytes, TPhysAddr& aPhysAddr, TInt aAlign);
TInt ZoneAllocPhysicalRam(TUint* aZoneIdList, TUint aZoneIdCount, TInt aNumPages, TPhysAddr* aPageList);
TInt RamHalFunction(TInt aFunction, TAny* a1, TAny* a2);
void ChangePageType(SPageInfo* aPageInfo, TZonePageType aOldPageType, TZonePageType aNewPageType);

TInt AllocPhysicalRam(TPhysAddr* aPages, TUint aCount, TRamAllocFlags aFlags);
void FreePhysicalRam(TPhysAddr* aPages, TUint aCount);
TInt AllocPhysicalRam(TPhysAddr& aPhysAddr, TUint aCount, TUint aAlign, TRamAllocFlags aFlags);
void FreePhysicalRam(TPhysAddr aPhysAddr, TUint aCount);
TInt ClaimPhysicalRam(TPhysAddr aPhysAddr, TUint aCount, TRamAllocFlags aFlags);
void AllocatedPhysicalRam(TPhysAddr aPhysAddr, TUint aCount, TRamAllocFlags aFlags);

TLinAddr MapTemp(TPhysAddr aPage, TUint aColour, TUint aSlot=0);
void UnmapTemp(TUint aSlot=0);
void RemoveAliasesForPageTable(TPhysAddr aPageTable);

static TBool MapPages(TPte* const aPtePtr, const TUint aCount, TPhysAddr* aPages, TPte aBlankPte);
static TBool UnmapPages(TPte* const aPtePtr, TUint aCount);
static TBool UnmapPages(TPte* const aPtePtr, TUint aCount, TPhysAddr* aPages);
static void RemapPage(TPte* const aPtePtr, TPhysAddr& aPage, TPte aBlankPte);
static void RestrictPagesNA(TPte* const aPtePtr, TUint aCount, TPhysAddr* aPages);
static TBool PageInPages(TPte* const aPtePtr, const TUint aCount, TPhysAddr* aPages, TPte aBlankPte);
// implemented in CPU-specific code...
static TUint PteType(TMappingPermissions aPermissions, TBool aGlobal);
static TUint PdeType(TMemoryAttributes aAttributes);
static TPte BlankPte(TMemoryAttributes aAttributes, TUint aPteType);
static TPde BlankPde(TMemoryAttributes aAttributes);
static TPde BlankSectionPde(TMemoryAttributes aAttributes, TUint aPteType);
static TBool CheckPteTypePermissions(TUint aPteType, TUint aAccessPermissions);
static TMappingPermissions PermissionsFromPteType(TUint aPteType);
void PagesAllocated(TPhysAddr* aPageList, TUint aCount, TRamAllocFlags aFlags, TBool aReallocate=false);
void PageFreed(SPageInfo* aPageInfo);
void CleanAndInvalidatePages(TPhysAddr* aPages, TUint aCount, TMemoryAttributes aAttributes, TUint aColour);

// utils, implemented in CPU-specific code...
static TPde* PageDirectory(TInt aOsAsid);
static TPde* PageDirectoryEntry(TInt aOsAsid, TLinAddr aAddress);
static TPhysAddr PdePhysAddr(TPde aPde);
static TPhysAddr PtePhysAddr(TPte aPte, TUint aPteIndex);
static TPte* PageTableFromPde(TPde aPde);
static TPte* SafePageTableFromPde(TPde aPde);
static TPhysAddr SectionBaseFromPde(TPde aPde);
static TPte* PtePtrFromLinAddr(TLinAddr aAddress, TInt aOsAsid);
static TPte* SafePtePtrFromLinAddr(TLinAddr aAddress, TInt aOsAsid);
static TPhysAddr PageTablePhysAddr(TPte* aPt);
static TPhysAddr LinearToPhysical(TLinAddr aAddr, TInt aOsAsid=KKernelOsAsid);
static TPhysAddr UncheckedLinearToPhysical(TLinAddr aAddr, TInt aOsAsid);
static TPte MakePteInaccessible(TPte aPte, TBool aReadOnly);
static TPte MakePteAccessible(TPte aPte, TBool aWrite);
static TBool IsPteReadOnly(TPte aPte);
static TBool IsPteMoreAccessible(TPte aNewPte, TPte aOldPte);
static TBool IsPteInaccessible(TPte aPte);
static TBool PdeMapsPageTable(TPde aPde);
static TBool PdeMapsSection(TPde aPde);

void SyncPhysicalMemoryBeforeDmaWrite(TPhysAddr* aPages, TUint aColour, TUint aOffset, TUint aSize, TUint32 aMapAttr);
void SyncPhysicalMemoryBeforeDmaRead (TPhysAddr* aPages, TUint aColour, TUint aOffset, TUint aSize, TUint32 aMapAttr);
void SyncPhysicalMemoryAfterDmaRead  (TPhysAddr* aPages, TUint aColour, TUint aOffset, TUint aSize, TUint32 aMapAttr);

static TPte SectionToPageEntry(TPde& aPde);
static TPde PageToSectionEntry(TPte aPte, TPde aPde);
static TMemoryAttributes CanonicalMemoryAttributes(TMemoryAttributes aAttr);
Class representing the resources and methods required to create temporary
mappings of physical memory pages in order to make them accessible to
software.
These are required by various memory model functions and are created only
during system boot.

void Alloc(TUint aNumPages);
TLinAddr Map(TPhysAddr aPage, TUint aColour);
TLinAddr Map(TPhysAddr aPage, TUint aColour, TPte aBlankPte);
TLinAddr Map(TPhysAddr* aPages, TUint aCount, TUint aColour);
void Unmap(TBool aIMBRequired);

FORCE_INLINE TTempMapping()

TLinAddr iLinAddr; ///< Virtual address of the memory page mapped by #iPtePtr.
TPte* iPtePtr;     ///< Pointer to first PTE allocated to this object.
TPte iBlankPte;    ///< PTE value to use for mapping pages, with the physical address component equal to zero.
TUint8 iSize;      ///< Maximum number of pages which can be mapped in one go.
TUint8 iCount;     ///< Number of pages currently mapped.
TUint8 iColour;    ///< Colour of any pages mapped (acts as index from #iLinAddr and #iPtePtr).
static TLinAddr iNextLinAddr;

enum { KNumTempMappingSlots=2 };

Temporary mappings used by various functions.
Use of these is serialised by the #RamAllocLock.

TTempMapping iTempMap[KNumTempMappingSlots];

TTempMapping iPhysMemSyncTemp; ///< Temporary mapping used for physical memory sync.
DMutex* iPhysMemSyncMutex;     ///< Mutex used to serialise use of #iPhysMemSyncTemp.

TPte iTempPteCached;           ///< PTE value for cached temporary mappings
TPte iTempPteUncached;         ///< PTE value for uncached temporary mappings
TPte iTempPteCacheMaintenance; ///< PTE value for temporary mapping of cache maintenance

DRamAllocator* iRamPageAllocator;  ///< The RAM allocator used for managing free RAM pages.
const SRamZone* iRamZones;         ///< A pointer to the RAM zone configuration from the variant.
TRamZoneCallback iRamZoneCallback; ///< Pointer to the RAM zone callback function.
Defrag* iDefrag;                   ///< The RAM defrag class implementation.
A counter incremented every time Mmu::PagesAllocated invalidates the L1 cache.
This is used as part of a cache maintenance optimisation.

TInt iCacheInvalidateCounter;

Number of free RAM pages which are cached at L1 and have
SPageInfo::CacheInvalidateCounter()==#iCacheInvalidateCounter.
This is used as part of a cache maintenance optimisation.

TInt iCacheInvalidatePageCount;

Linked list of threads which have an active IPC alias, i.e. have called
DMemModelThread::Alias. Threads are linked by their DMemModelThread::iAliasLink member.
Updates to this list are protected by the #MmuLock.

The mutex used to protect RAM allocation.
This is the mutex #RamAllocLock operates on.

DMutex* iRamAllocatorMutex;

Number of nested calls to RamAllocLock::Lock.

TUint iRamAllocLockCount;

Set by various memory allocation routines to indicate that a memory allocation
has failed. This is used by #RamAllocLock in its management of out-of-memory
notifications.

TBool iRamAllocFailed;

Saved value of #FreeRamInPages which is used by #RamAllocLock in its management
of memory level change notifications.

TUint iRamAllocInitialFreePages;

friend class RamAllocLock;
The single instance of class #Mmu.

Perform a page table walk to return the physical address of
the memory mapped at virtual address \a aAddr in the
address space \a aOsAsid.

If the page table used was not one allocated by the kernel
then the results are unpredictable and may cause a system fault.

FORCE_INLINE TPhysAddr Mmu::LinearToPhysical(TLinAddr aAddr, TInt aOsAsid)
    return Mmu::UncheckedLinearToPhysical(aAddr,aOsAsid);

__ASSERT_COMPILE((Mmu::EAllocFlagLast>>Mmu::EAllocWipeByteShift)==0); // make sure flags don't run into wipe byte value
Create a temporary mapping of a physical page.
The RamAllocatorMutex must be held before this function is called and not released
until after UnmapTemp has been called.

@param aPage   The physical address of the page to be mapped.
@param aColour The 'colour' of the page if relevant.
@param aSlot   Slot number to use, must be less than Mmu::KNumTempMappingSlots.

@return The linear address at which the page has been mapped.

FORCE_INLINE TLinAddr Mmu::MapTemp(TPhysAddr aPage, TUint aColour, TUint aSlot)
    // Kern::Printf("Mmu::MapTemp(0x%08x,%d,%d)",aPage,aColour,aSlot);
    __NK_ASSERT_DEBUG(RamAllocLock::IsHeld());
    __NK_ASSERT_DEBUG(aSlot<KNumTempMappingSlots);
    return iTempMap[aSlot].Map(aPage,aColour);

Remove the temporary mapping created with MapTemp.

@param aSlot Slot number which was used when the temp mapping was made.

FORCE_INLINE void Mmu::UnmapTemp(TUint aSlot)
    // Kern::Printf("Mmu::UnmapTemp(%d)",aSlot);
    __NK_ASSERT_DEBUG(RamAllocLock::IsHeld());
    __NK_ASSERT_DEBUG(aSlot<KNumTempMappingSlots);
    iTempMap[aSlot].Unmap();
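/*
Illustrative sketch of using a temporary mapping (assumed usage; the single #Mmu
instance is assumed to be named TheMmu): the RamAllocLock serialises access to
the mapping slots, as asserted by MapTemp and UnmapTemp above.

    RamAllocLock::Lock();
    TLinAddr va = TheMmu.MapTemp(physAddr,colour);
    memclr((TAny*)va,KPageSize); // access the physical page via the mapping
    TheMmu.UnmapTemp();
    RamAllocLock::Unlock();
*/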
Class representing the resources and arguments needed for various
memory pinning operations.

The term 'replacement pages' in this documentation means excess
RAM pages which have been allocated to the demand paging pool so
that when demand paged memory is pinned and removed, the pool
does not become too small.

Replacement pages are allocated with #AllocReplacementPages and their
number remembered in #iReplacementPages. When a memory pinning operation
removes pages from the paging pool it will reduce #iReplacementPages
accordingly. At the end of the pinning operation, #FreeReplacementPages
is used to free any unused replacement pages.

Boolean value set to true if the requester of the pinning operation
will only read from the pinned memory, not write to it.
This is used as an optimisation to avoid unnecessarily marking
demand paged memory as dirty.

Boolean value set to true if sufficient replacement pages already exist
in the demand paging pool and #AllocReplacementPages does not need
to actually allocate any.

The number of replacement pages allocated to this object by #AllocReplacementPages.
A value of #EUseReserveForPinReplacementPages indicates that #iUseReserve
was true, and there is sufficient RAM already reserved for the operation.

TUint iReplacementPages;

The number of page tables which have been pinned during the course
of an operation. This is the number of valid entries written to
#iPinnedPageTables.

TUint iNumPinnedPageTables;

Pointer to the location to store the addresses of any page tables
which have been pinned during the course of an operation. This is
incremented as entries are added.

The null-pointer indicates that page tables do not require pinning.

TPte** iPinnedPageTables;
Construct an empty TPinArgs, one which owns no resources.

    : iReadOnly(0), iUseReserve(0), iReplacementPages(0), iNumPinnedPageTables(0), iPinnedPageTables(0)

Return true if this TPinArgs has at least \a aRequired number of
replacement pages allocated.

FORCE_INLINE TBool HaveSufficientPages(TUint aRequired)
    return iReplacementPages>=aRequired; // Note, EUseReserveForPinReplacementPages will always return true.

Allocate replacement pages for this TPinArgs so that it has at least
\a aNumPages.

TInt AllocReplacementPages(TUint aNumPages);

Free all replacement pages which this TPinArgs still owns.

void FreeReplacementPages();

Value used to indicate that replacement pages are to come
from an already allocated reserve and don't need specially
allocating.

enum { EUseReserveForPinReplacementPages = 0xffffffffu };
inline TPinArgs::~TPinArgs()
    __NK_ASSERT_DEBUG(!iReplacementPages);
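/*
Illustrative sketch of a TPinArgs lifecycle (assumed usage; DoPinPages is a
hypothetical pinning step): replacement pages are reserved up front, consumed as
pinning removes pages from the paging pool, and any excess returned before
destruction, since the destructor asserts that none remain.

    TPinArgs pinArgs;
    pinArgs.iReadOnly = aReadOnly;
    TInt r = pinArgs.AllocReplacementPages(aPageCount);
    if(r==KErrNone)
        {
        r = DoPinPages(aMemory,aIndex,aPageCount,pinArgs); // hypothetical
        pinArgs.FreeReplacementPages();
        }
    return r;
*/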
Enumeration used in various RestrictPages APIs to specify the type of restrictions to apply.

enum TRestrictPagesType

Make all mappings of the page not accessible.
Pinned mappings will veto this operation.

ERestrictPagesNoAccess = 1,

Demand paged memory being made 'old'.
Specific case of ERestrictPagesNoAccess.

ERestrictPagesNoAccessForOldPage = ERestrictPagesNoAccess|0x80000000,

For page moving, pinned mappings always veto the moving operation.

ERestrictPagesForMovingFlag = 0x40000000,

Movable memory being made no-access whilst it is being copied.
Special case of ERestrictPagesNoAccess where pinned mappings always veto
this operation even if they are read-only mappings.

ERestrictPagesNoAccessForMoving = ERestrictPagesNoAccess|ERestrictPagesForMovingFlag,