Shared Pseudocode Functions

This page displays common pseudocode functions shared by many pages.

Pseudocodes

Library pseudocode for aarch32/debug/VCRMatch/AArch32.VCRMatch

// AArch32.VCRMatch()
// ==================

boolean AArch32.VCRMatch(bits(32) vaddress)

    if UsingAArch32() && ELUsingAArch32(EL1) && IsZero(vaddress<1:0>) && PSTATE.EL != EL2 then
        // Each bit position in this string corresponds to a bit in DBGVCR and an exception vector.
        match_word = Zeros(32);

        if vaddress<31:5> == ExcVectorBase()<31:5> then
            if HaveEL(EL3) && !IsSecure() then
                match_word<UInt(vaddress<4:2>) + 24> = '1';    // Non-secure vectors
            else
                match_word<UInt(vaddress<4:2>) + 0> = '1';     // Secure vectors (or no EL3)

        if HaveEL(EL3) && ELUsingAArch32(EL3) && IsSecure() && vaddress<31:5> == MVBAR<31:5> then
            match_word<UInt(vaddress<4:2>) + 8> = '1';         // Monitor vectors

        // Mask out bits not corresponding to vectors.
        if !HaveEL(EL3) then
            mask = '00000000':'00000000':'00000000':'11011110';    // DBGVCR[31:8] are RES0
        elsif !ELUsingAArch32(EL3) then
            mask = '11011110':'00000000':'00000000':'11011110';    // DBGVCR[15:8] are RES0
        else
            mask = '11011110':'00000000':'11011100':'11011110';

        match_word = match_word AND DBGVCR AND mask;
        match = !IsZero(match_word);

        // Check for UNPREDICTABLE case - match on Prefetch Abort and Data Abort vectors
        if !IsZero(match_word<28:27,12:11,4:3>) && DebugTarget() == PSTATE.EL then
            match = ConstrainUnpredictableBool(Unpredictable_VCMATCHDAPA);
    else
        match = FALSE;

    return match;
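The match_word above groups the exception vectors into three banks: Secure (or no-EL3) vectors at bit 0, Monitor vectors at bit 8, and Non-secure vectors at bit 24, indexed within each bank by vaddress<4:2>. The following is a minimal C sketch of that indexing only (illustrative, not architectural code; the function and constant names are hypothetical):

    #include <stdint.h>
    #include <stdio.h>

    /* Bank bases follow the match_word layout above. */
    enum vcr_bank { BANK_SECURE = 0, BANK_MONITOR = 8, BANK_NONSECURE = 24 };

    /* Which DBGVCR bit position a vector-catch comparison would set for a
     * vector fetch at "vaddress" within the given bank of vectors. */
    static int vcr_bit_index(uint32_t vaddress, enum vcr_bank bank)
    {
        return (int)((vaddress >> 2) & 0x7u) + (int)bank;
    }

    int main(void)
    {
        /* Prefetch Abort vector is at offset 0x0C, so vaddress<4:2> == 3. */
        printf("NS Prefetch Abort -> DBGVCR bit %d\n",
               vcr_bit_index(0x0000000Cu, BANK_NONSECURE));   /* 27 */
        printf("Secure Data Abort -> DBGVCR bit %d\n",
               vcr_bit_index(0x00000010u, BANK_SECURE));      /* 4 */
        return 0;
    }

Bits 27 and 4 are exactly the Prefetch Abort and Data Abort positions tested in the UNPREDICTABLE check (match_word<28:27,12:11,4:3>).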

Library pseudocode for aarch32/debug/authentication/AArch32.SelfHostedSecurePrivilegedInvasiveDebugEnabled

// AArch32.SelfHostedSecurePrivilegedInvasiveDebugEnabled()
// ========================================================

boolean AArch32.SelfHostedSecurePrivilegedInvasiveDebugEnabled()
    // The definition of this function is IMPLEMENTATION DEFINED.
    // In the recommended interface, AArch32.SelfHostedSecurePrivilegedInvasiveDebugEnabled returns
    // the state of the (DBGEN AND SPIDEN) signal.
    if !HaveEL(EL3) && !IsSecure() then return FALSE;
    return DBGEN == HIGH && SPIDEN == HIGH;

Library pseudocode for aarch32/debug/breakpoint/AArch32.BreakpointMatch

// AArch32.BreakpointMatch()
// =========================
// Breakpoint matching in an AArch32 translation regime.

(boolean,boolean) AArch32.BreakpointMatch(integer n, bits(32) vaddress, integer size)
    assert ELUsingAArch32(S1TranslationRegime());
    assert n <= UInt(DBGDIDR.BRPs);

    enabled = DBGBCR[n].E == '1';
    ispriv = PSTATE.EL != EL0;
    linked = DBGBCR[n].BT == '0x01';
    isbreakpnt = TRUE;
    linked_to = FALSE;

    state_match = AArch32.StateMatch(DBGBCR[n].SSC, DBGBCR[n].HMC, DBGBCR[n].PMC,
                                     linked, DBGBCR[n].LBN, isbreakpnt, ispriv);
    (value_match, value_mismatch) = AArch32.BreakpointValueMatch(n, vaddress, linked_to);

    if size == 4 then                       // Check second halfword
        // If the breakpoint address and BAS of an Address breakpoint match the address of the
        // second halfword of an instruction, but not the address of the first halfword, it is
        // CONSTRAINED UNPREDICTABLE whether or not this breakpoint generates a Breakpoint debug
        // event.
        (match_i, mismatch_i) = AArch32.BreakpointValueMatch(n, vaddress + 2, linked_to);
        if !value_match && match_i then
            value_match = ConstrainUnpredictableBool(Unpredictable_BPMATCHHALF);
        if value_mismatch && !mismatch_i then
            value_mismatch = ConstrainUnpredictableBool(Unpredictable_BPMISMATCHHALF);

    if vaddress<1> == '1' && DBGBCR[n].BAS == '1111' then
        // The above notwithstanding, if DBGBCR[n].BAS == '1111', then it is CONSTRAINED
        // UNPREDICTABLE whether or not a Breakpoint debug event is generated for an instruction
        // at the address DBGBVR[n]+2.
        if value_match then
            value_match = ConstrainUnpredictableBool(Unpredictable_BPMATCHHALF);
        if !value_mismatch then
            value_mismatch = ConstrainUnpredictableBool(Unpredictable_BPMISMATCHHALF);

    match = value_match && state_match && enabled;
    mismatch = value_mismatch && state_match && enabled;

    return (match, mismatch);

Library pseudocode for aarch32/debug/breakpoint/AArch32.BreakpointValueMatch

// AArch32.BreakpointValueMatch()
// ==============================
// The first result is whether an Address Match or Context breakpoint is programmed on the
// instruction at "address". The second result is whether an Address Mismatch breakpoint is
// programmed on the instruction, that is, whether the instruction should be stepped.

(boolean,boolean) AArch32.BreakpointValueMatch(integer n, bits(32) vaddress, boolean linked_to)

    // "n" is the identity of the breakpoint unit to match against.
    // "vaddress" is the current instruction address, ignored if linked_to is TRUE and for Context
    //   matching breakpoints.
    // "linked_to" is TRUE if this is a call from StateMatch for linking.

    // If a non-existent breakpoint then it is CONSTRAINED UNPREDICTABLE whether this gives
    // no match or the breakpoint is mapped to another UNKNOWN implemented breakpoint.
    if n > UInt(DBGDIDR.BRPs) then
        (c, n) = ConstrainUnpredictableInteger(0, UInt(DBGDIDR.BRPs), Unpredictable_BPNOTIMPL);
        assert c IN {Constraint_DISABLED, Constraint_UNKNOWN};
        if c == Constraint_DISABLED then return (FALSE,FALSE);

    // If this breakpoint is not enabled, it cannot generate a match. (This could also happen on a
    // call from StateMatch for linking).
    if DBGBCR[n].E == '0' then return (FALSE,FALSE);

    context_aware = (n >= UInt(DBGDIDR.BRPs) - UInt(DBGDIDR.CTX_CMPs));

    // If BT is set to a reserved type, behaves either as disabled or as a not-reserved type.
    type = DBGBCR[n].BT;

    if ((type IN {'011x','11xx'} && !HaveVirtHostExt()) ||          // Context matching
          (type == '010x' && HaltOnBreakpointOrWatchpoint()) ||     // Address mismatch
          (type != '0x0x' && !context_aware) ||                     // Context matching
          (type == '1xxx' && !HaveEL(EL2))) then                    // EL2 extension
        (c, type) = ConstrainUnpredictableBits(Unpredictable_RESBPTYPE);
        assert c IN {Constraint_DISABLED, Constraint_UNKNOWN};
        if c == Constraint_DISABLED then return (FALSE,FALSE);
        // Otherwise the value returned by ConstrainUnpredictableBits must be a not-reserved value

    // Determine what to compare against.
    match_addr = (type == '0x0x');
    mismatch   = (type == '010x');
    match_vmid = (type == '10xx');
    match_cid1 = (type == 'xx1x');
    match_cid2 = (type == '11xx');
    linked     = (type == 'xxx1');

    // If this is a call from StateMatch, return FALSE if the breakpoint is not programmed for a
    // VMID and/or context ID match, or if not context-aware. The above assertions mean that the
    // code can just test for match_addr == TRUE to confirm all these things.
    if linked_to && (!linked || match_addr) then return (FALSE,FALSE);

    // If called from BreakpointMatch return FALSE for Linked context ID and/or VMID matches.
    if !linked_to && linked && !match_addr then return (FALSE,FALSE);

    // Do the comparison.
    if match_addr then
        byte = UInt(vaddress<1:0>);
        assert byte IN {0,2};                   // "vaddress" is halfword aligned
        byte_select_match = (DBGBCR[n].BAS<byte> == '1');
        BVR_match = vaddress<31:2> == DBGBVR[n]<31:2> && byte_select_match;
    elsif match_cid1 then
        BVR_match = (PSTATE.EL != EL2 && CONTEXTIDR == DBGBVR[n]<31:0>);
    if match_vmid then
        if ELUsingAArch32(EL2) then
            vmid = ZeroExtend(VTTBR.VMID, 16);
            bvr_vmid = ZeroExtend(DBGBXVR[n]<7:0>, 16);
        elsif !Have16bitVMID() || VTCR_EL2.VS == '0' then
            vmid = ZeroExtend(VTTBR_EL2.VMID<7:0>, 16);
            bvr_vmid = ZeroExtend(DBGBXVR[n]<7:0>, 16);
        else
            vmid = VTTBR_EL2.VMID;
            bvr_vmid = DBGBXVR[n]<15:0>;
        BXVR_match = (EL2Enabled() && PSTATE.EL IN {EL0,EL1} && vmid == bvr_vmid);
    elsif match_cid2 then
        BXVR_match = (!IsSecure() && HaveVirtHostExt() && !ELUsingAArch32(EL2) &&
                      DBGBXVR[n]<31:0> == CONTEXTIDR_EL2);

    bvr_match_valid = (match_addr || match_cid1);
    bxvr_match_valid = (match_vmid || match_cid2);

    match = (!bxvr_match_valid || BXVR_match) && (!bvr_match_valid || BVR_match);

    return (match && !mismatch, !match && mismatch);

Library pseudocode for aarch32/debug/breakpoint/AArch32.StateMatch

// AArch32.StateMatch()
// ====================
// Determine whether a breakpoint or watchpoint is enabled in the current mode and state.

boolean AArch32.StateMatch(bits(2) SSC, bit HMC, bits(2) PxC, boolean linked, bits(4) LBN,
                           boolean isbreakpnt, boolean ispriv)
    // "SSC", "HMC", "PxC" are the control fields from the DBGBCR[n] or DBGWCR[n] register.
    // "linked" is TRUE if this is a linked breakpoint/watchpoint type.
    // "LBN" is the linked breakpoint number from the DBGBCR[n] or DBGWCR[n] register.
    // "isbreakpnt" is TRUE for breakpoints, FALSE for watchpoints.
    // "ispriv" is valid for watchpoints, and selects between privileged and unprivileged accesses.

    // If parameters are set to a reserved type, behaves as either disabled or a defined type
    if ((HMC:SSC:PxC) IN {'011xx','100x0','101x0','11010','11101','1111x'} ||       // Reserved
          (HMC == '0' && PxC == '00' && !isbreakpnt) ||                             // Usr/Svc/Sys
          (SSC IN {'01','10'} && !HaveEL(EL3)) ||                                   // No EL3
          (HMC:SSC:PxC == '11000' && ELUsingAArch32(EL3)) ||                        // AArch64 only
          (HMC:SSC != '000' && HMC:SSC != '111' && !HaveEL(EL3) && !HaveEL(EL2)) || // No EL3/EL2
          (HMC:SSC:PxC == '11100' && !HaveEL(EL2))) then                            // No EL2
        (c, <HMC,SSC,PxC>) = ConstrainUnpredictableBits(Unpredictable_RESBPWPCTRL);
        assert c IN {Constraint_DISABLED, Constraint_UNKNOWN};
        if c == Constraint_DISABLED then return FALSE;
        // Otherwise the value returned by ConstrainUnpredictableBits must be a not-reserved value

    PL2_match = HaveEL(EL2) && HMC == '1';
    PL1_match = PxC<0> == '1';
    PL0_match = PxC<1> == '1';
    SSU_match = isbreakpnt && HMC == '0' && PxC == '00' && SSC != '11';

    el = PSTATE.EL;
    if !ispriv && !isbreakpnt then
        priv_match = PL0_match;
    elsif SSU_match then
        priv_match = PSTATE.M IN {M32_User,M32_Svc,M32_System};
    else
        case el of
            when EL3  priv_match = PL1_match;       // EL3 and EL1 are both PL1
            when EL2  priv_match = PL2_match;
            when EL1  priv_match = PL1_match;
            when EL0  priv_match = PL0_match;

    case SSC of
        when '00'  security_state_match = TRUE;             // Both
        when '01'  security_state_match = !IsSecure();      // Non-secure only
        when '10'  security_state_match = IsSecure();       // Secure only
        when '11'  security_state_match = TRUE;             // Both

    if linked then
        // "LBN" must be an enabled context-aware breakpoint unit. If it is not context-aware then
        // it is CONSTRAINED UNPREDICTABLE whether this gives no match, or LBN is mapped to some
        // UNKNOWN breakpoint that is context-aware.
        lbn = UInt(LBN);
        first_ctx_cmp = (UInt(DBGDIDR.BRPs) - UInt(DBGDIDR.CTX_CMPs));
        last_ctx_cmp = UInt(DBGDIDR.BRPs);
        if (lbn < first_ctx_cmp || lbn > last_ctx_cmp) then
            (c, lbn) = ConstrainUnpredictableInteger(first_ctx_cmp, last_ctx_cmp,
                                                     Unpredictable_BPNOTCTXCMP);
            assert c IN {Constraint_DISABLED, Constraint_NONE, Constraint_UNKNOWN};
            case c of
                when Constraint_DISABLED  return FALSE;     // Disabled
                when Constraint_NONE      linked = FALSE;   // No linking
                // Otherwise ConstrainUnpredictableInteger returned a context-aware breakpoint

    if linked then
        vaddress = bits(32) UNKNOWN;
        linked_to = TRUE;
        (linked_match,-) = AArch32.BreakpointValueMatch(lbn, vaddress, linked_to);

    return priv_match && security_state_match && (!linked || linked_match);

Library pseudocode for aarch32/debug/enables/AArch32.GenerateDebugExceptions

// AArch32.GenerateDebugExceptions()
// =================================

boolean AArch32.GenerateDebugExceptions()
    return AArch32.GenerateDebugExceptionsFrom(PSTATE.EL, IsSecure());

Library pseudocode for aarch32/debug/enables/AArch32.GenerateDebugExceptionsFrom

// AArch32.GenerateDebugExceptionsFrom()
// =====================================

boolean AArch32.GenerateDebugExceptionsFrom(bits(2) from, boolean secure)

    if from == EL0 && !ELStateUsingAArch32(EL1, secure) then
        mask = bit UNKNOWN;                     // PSTATE.D mask, unused for EL0 case
        return AArch64.GenerateDebugExceptionsFrom(from, secure, mask);

    if DBGOSLSR.OSLK == '1' || DoubleLockStatus() || Halted() then
        return FALSE;

    if HaveEL(EL3) && secure then
        spd = if ELUsingAArch32(EL3) then SDCR.SPD else MDCR_EL3.SPD32;
        if spd<1> == '1' then
            enabled = spd<0> == '1';
        else
            // SPD == 0b01 is reserved, but behaves the same as 0b00.
            enabled = AArch32.SelfHostedSecurePrivilegedInvasiveDebugEnabled();
        if from == EL0 then enabled = enabled || SDER.SUIDEN == '1';
    else
        enabled = from != EL2;

    return enabled;

Library pseudocode for aarch32/debug/pmu/AArch32.CheckForPMUOverflow

// AArch32.CheckForPMUOverflow()
// =============================
// Signal Performance Monitors overflow IRQ and CTI overflow events

boolean AArch32.CheckForPMUOverflow()

    if !ELUsingAArch32(EL1) then return AArch64.CheckForPMUOverflow();

    pmuirq = PMCR.E == '1' && PMINTENSET<31> == '1' && PMOVSSET<31> == '1';
    for n = 0 to UInt(PMCR.N) - 1
        if HaveEL(EL2) then
            hpmn = if !ELUsingAArch32(EL2) then MDCR_EL2.HPMN else HDCR.HPMN;
            hpme = if !ELUsingAArch32(EL2) then MDCR_EL2.HPME else HDCR.HPME;
            E = (if n < UInt(hpmn) then PMCR.E else hpme);
        else
            E = PMCR.E;
        if E == '1' && PMINTENSET<n> == '1' && PMOVSSET<n> == '1' then pmuirq = TRUE;

    SetInterruptRequestLevel(InterruptID_PMUIRQ, if pmuirq then HIGH else LOW);

    CTI_SetEventLevel(CrossTriggerIn_PMUOverflow, if pmuirq then HIGH else LOW);

    // The request remains set until the condition is cleared. (For example, an interrupt handler
    // or cross-triggered event handler clears the overflow status flag by writing to PMOVSR.)

    return pmuirq;

Library pseudocode for aarch32/debug/pmu/AArch32.CountEvents

// AArch32.CountEvents()
// =====================
// Return TRUE if counter "n" should count its event. For the cycle counter, n == 31.

boolean AArch32.CountEvents(integer n)
    assert n == 31 || n < UInt(PMCR.N);

    if !ELUsingAArch32(EL1) then return AArch64.CountEvents(n);

    // Event counting is disabled in Debug state
    debug = Halted();

    // In Non-secure state, some counters are reserved for EL2
    if HaveEL(EL2) then
        hpmn = if !ELUsingAArch32(EL2) then MDCR_EL2.HPMN else HDCR.HPMN;
        hpme = if !ELUsingAArch32(EL2) then MDCR_EL2.HPME else HDCR.HPME;
        E = if n < UInt(hpmn) || n == 31 then PMCR.E else hpme;
    else
        E = PMCR.E;
    enabled = E == '1' && PMCNTENSET<n> == '1';

    if !IsSecure() then
        // Event counting in Non-secure state is allowed unless all of:
        // * EL2 and the HPMD Extension are implemented
        // * Executing at EL2
        // * PMNx is not reserved for EL2
        // * HDCR.HPMD == 1
        if HaveHPMDExt() && PSTATE.EL == EL2 && (n < UInt(hpmn) || n == 31) then
            hpmd = if !ELUsingAArch32(EL2) then MDCR_EL2.HPMD else HDCR.HPMD;
            prohibited = (hpmd == '1');
        else
            prohibited = FALSE;
    else
        // Event counting in Secure state is prohibited unless any one of:
        // * EL3 is not implemented
        // * EL3 is using AArch64 and MDCR_EL3.SPME == 1
        // * EL3 is using AArch32 and SDCR.SPME == 1
        // * Executing at EL0, and SDER.SUNIDEN == 1.
        spme = (if ELUsingAArch32(EL3) then SDCR.SPME else MDCR_EL3.SPME);
        prohibited = HaveEL(EL3) && spme == '0' && (PSTATE.EL != EL0 || SDER.SUNIDEN == '0');

    // The IMPLEMENTATION DEFINED authentication interface might override software controls
    if prohibited && !HaveNoSecurePMUDisableOverride() then
        prohibited = !ExternalSecureNoninvasiveDebugEnabled();

    // For the cycle counter, PMCR.DP enables counting when otherwise prohibited
    if prohibited && n == 31 then prohibited = (PMCR.DP == '1');

    // Event counting can be filtered by the {P, U, NSK, NSU, NSH} bits
    filter = if n == 31 then PMCCFILTR else PMEVTYPER[n];

    P = filter<31>;
    U = filter<30>;
    NSK = if HaveEL(EL3) then filter<29> else '0';
    NSU = if HaveEL(EL3) then filter<28> else '0';
    NSH = if HaveEL(EL2) then filter<27> else '0';

    case PSTATE.EL of
        when EL0  filtered = if IsSecure() then U == '1' else U != NSU;
        when EL1  filtered = if IsSecure() then P == '1' else P != NSK;
        when EL2  filtered = (NSH == '0');
        when EL3  filtered = (P == '1');

    return !debug && enabled && !prohibited && !filtered;
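The final filter step above reduces to a small truth table over the {P, U, NSK, NSU, NSH} bits. Below is a minimal C sketch of that step only (illustrative, not architectural code); the function name and the boolean-input convention are assumptions, and the HaveEL() masking of NSK/NSU/NSH is taken as already applied:

    #include <stdbool.h>

    /* "el" is 0..3 for EL0..EL3. Returns TRUE if the event is filtered out,
     * mirroring the "case PSTATE.EL of" block in AArch32.CountEvents. */
    static bool event_filtered(int el, bool secure,
                               bool P, bool U, bool NSK, bool NSU, bool NSH)
    {
        switch (el) {
        case 0:  return secure ? U : (U != NSU);
        case 1:  return secure ? P : (P != NSK);
        case 2:  return !NSH;
        default: return P;      /* EL3 */
        }
    }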

Library pseudocode for aarch32/debug/takeexceptiondbg/AArch32.EnterHypModeInDebugState

// AArch32.EnterHypModeInDebugState()
// ==================================
// Take an exception in Debug state to Hyp mode.

AArch32.EnterHypModeInDebugState(ExceptionRecord exception)
    SynchronizeContext();
    assert HaveEL(EL2) && !IsSecure() && ELUsingAArch32(EL2);

    AArch32.ReportHypEntry(exception);
    AArch32.WriteMode(M32_Hyp);
    SPSR[] = bits(32) UNKNOWN;
    if HaveSSBSExt() then PSTATE.SSBS = bits(1) UNKNOWN;
    ELR_hyp = bits(32) UNKNOWN;
    // In Debug state, the PE always executes T32 instructions when in AArch32 state, and
    // PSTATE.{SS,A,I,F} are not observable so behave as UNKNOWN.
    PSTATE.T = '1';                             // PSTATE.J is RES0
    PSTATE.<SS,A,I,F> = bits(4) UNKNOWN;
    DLR = bits(32) UNKNOWN;
    DSPSR = bits(32) UNKNOWN;
    PSTATE.E = HSCTLR.EE;
    PSTATE.IL = '0';
    PSTATE.IT = '00000000';
    EDSCR.ERR = '1';
    UpdateEDSCRFields();

    EndOfInstruction();

Library pseudocode for aarch32/debug/takeexceptiondbg/AArch32.EnterModeInDebugState

// AArch32.EnterModeInDebugState()
// ===============================
// Take an exception in Debug state to a mode other than Monitor and Hyp mode.

AArch32.EnterModeInDebugState(bits(5) target_mode)
    SynchronizeContext();
    assert ELUsingAArch32(EL1) && PSTATE.EL != EL2;

    if PSTATE.M == M32_Monitor then SCR.NS = '0';
    AArch32.WriteMode(target_mode);
    SPSR[] = bits(32) UNKNOWN;
    if HaveSSBSExt() then PSTATE.SSBS = bits(1) UNKNOWN;
    R[14] = bits(32) UNKNOWN;
    // In Debug state, the PE always executes T32 instructions when in AArch32 state, and
    // PSTATE.{SS,A,I,F} are not observable so behave as UNKNOWN.
    PSTATE.T = '1';                             // PSTATE.J is RES0
    PSTATE.<SS,A,I,F> = bits(4) UNKNOWN;
    DLR = bits(32) UNKNOWN;
    DSPSR = bits(32) UNKNOWN;
    PSTATE.E = SCTLR.EE;
    PSTATE.IL = '0';
    PSTATE.IT = '00000000';
    if HavePANExt() && SCTLR.SPAN == '0' then PSTATE.PAN = '1';
    EDSCR.ERR = '1';
    UpdateEDSCRFields();                        // Update EDSCR processor state flags.

    EndOfInstruction();

Library pseudocode for aarch32/debug/takeexceptiondbg/AArch32.EnterMonitorModeInDebugState

// AArch32.EnterMonitorModeInDebugState()
// ======================================
// Take an exception in Debug state to Monitor mode.

AArch32.EnterMonitorModeInDebugState()
    SynchronizeContext();
    assert HaveEL(EL3) && ELUsingAArch32(EL3);

    from_secure = IsSecure();
    if PSTATE.M == M32_Monitor then SCR.NS = '0';
    AArch32.WriteMode(M32_Monitor);
    SPSR[] = bits(32) UNKNOWN;
    if HaveSSBSExt() then PSTATE.SSBS = bits(1) UNKNOWN;
    R[14] = bits(32) UNKNOWN;
    // In Debug state, the PE always executes T32 instructions when in AArch32 state, and
    // PSTATE.{SS,A,I,F} are not observable so behave as UNKNOWN.
    PSTATE.T = '1';                             // PSTATE.J is RES0
    PSTATE.<SS,A,I,F> = bits(4) UNKNOWN;
    DLR = bits(32) UNKNOWN;
    DSPSR = bits(32) UNKNOWN;
    PSTATE.E = SCTLR.EE;
    PSTATE.IL = '0';
    PSTATE.IT = '00000000';
    if HavePANExt() then
        if !from_secure then
            PSTATE.PAN = '0';
        elsif SCTLR.SPAN == '0' then
            PSTATE.PAN = '1';
    EDSCR.ERR = '1';
    UpdateEDSCRFields();                        // Update EDSCR processor state flags.

    EndOfInstruction();

Library pseudocode for aarch32/debug/watchpoint/AArch32.WatchpointByteMatch

// AArch32.WatchpointByteMatch()
// =============================

boolean AArch32.WatchpointByteMatch(integer n, bits(32) vaddress)

    bottom = if DBGWVR[n]<2> == '1' then 2 else 3;              // Word or doubleword
    byte_select_match = (DBGWCR[n].BAS<UInt(vaddress<bottom-1:0>)> != '0');
    mask = UInt(DBGWCR[n].MASK);

    // If DBGWCR[n].MASK is a non-zero value and DBGWCR[n].BAS is not set to '11111111', or
    // DBGWCR[n].BAS specifies a non-contiguous set of bytes, behavior is CONSTRAINED
    // UNPREDICTABLE.
    if mask > 0 && !IsOnes(DBGWCR[n].BAS) then
        byte_select_match = ConstrainUnpredictableBool(Unpredictable_WPMASKANDBAS);
    else
        LSB = (DBGWCR[n].BAS AND NOT(DBGWCR[n].BAS - 1));
        MSB = (DBGWCR[n].BAS + LSB);
        if !IsZero(MSB AND (MSB - 1)) then                      // Not contiguous
            byte_select_match = ConstrainUnpredictableBool(Unpredictable_WPBASCONTIGUOUS);
            bottom = 3;                                         // For the whole doubleword

    // If the address mask is set to a reserved value, the behavior is CONSTRAINED UNPREDICTABLE.
    if mask > 0 && mask <= 2 then
        (c, mask) = ConstrainUnpredictableInteger(3, 31, Unpredictable_RESWPMASK);
        assert c IN {Constraint_DISABLED, Constraint_NONE, Constraint_UNKNOWN};
        case c of
            when Constraint_DISABLED  return FALSE;             // Disabled
            when Constraint_NONE      mask = 0;                 // No masking
            // Otherwise the value returned by ConstrainUnpredictableInteger is a not-reserved value

    if mask > bottom then
        WVR_match = (vaddress<31:mask> == DBGWVR[n]<31:mask>);
        // If masked bits of DBGWVR[n] are not zero, the behavior is CONSTRAINED UNPREDICTABLE.
        if WVR_match && !IsZero(DBGWVR[n]<mask-1:bottom>) then
            WVR_match = ConstrainUnpredictableBool(Unpredictable_WPMASKEDBITS);
    else
        WVR_match = vaddress<31:bottom> == DBGWVR[n]<31:bottom>;

    return WVR_match && byte_select_match;
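The contiguity test above uses two classic bit tricks: BAS AND NOT(BAS - 1) isolates the lowest set bit, and adding that to BAS turns a contiguous run of ones into a single carry bit, so the sum is a power of two (or zero) exactly when the set bytes are contiguous. A minimal C sketch of just this test (illustrative, not architectural code; the helper name is hypothetical):

    #include <stdbool.h>
    #include <stdint.h>

    /* TRUE if the set bits of the 8-bit BAS field form one contiguous run
     * (or BAS is zero). Widened to 16 bits because BAS + LSB can carry out
     * of 8 bits when BAS == 0xFF. */
    static bool bas_is_contiguous(uint8_t bas)
    {
        uint8_t  lsb = bas & (uint8_t)(-bas);   /* lowest set bit */
        uint16_t msb = (uint16_t)bas + lsb;     /* clears the contiguous run */
        return (msb & (msb - 1)) == 0;          /* power of two => contiguous */
    }

For example, BAS = 0x0C (bytes 2-3) gives msb = 0x10, a power of two, so it is contiguous; BAS = 0xDE gives msb = 0xE0, which is not.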

Library pseudocode for aarch32/debug/watchpoint/AArch32.WatchpointMatch

// AArch32.WatchpointMatch()
// =========================
// Watchpoint matching in an AArch32 translation regime.

boolean AArch32.WatchpointMatch(integer n, bits(32) vaddress, integer size, boolean ispriv,
                                boolean iswrite)
    assert ELUsingAArch32(S1TranslationRegime());
    assert n <= UInt(DBGDIDR.WRPs);

    // "ispriv" is FALSE for LDRT/STRT instructions executed at EL1 and all
    // load/stores at EL0, TRUE for all other load/stores. "iswrite" is TRUE for stores, FALSE for
    // loads.
    enabled = DBGWCR[n].E == '1';
    linked = DBGWCR[n].WT == '1';
    isbreakpnt = FALSE;

    state_match = AArch32.StateMatch(DBGWCR[n].SSC, DBGWCR[n].HMC, DBGWCR[n].PAC,
                                     linked, DBGWCR[n].LBN, isbreakpnt, ispriv);

    ls_match = (DBGWCR[n].LSC<(if iswrite then 1 else 0)> == '1');

    value_match = FALSE;
    for byte = 0 to size - 1
        value_match = value_match || AArch32.WatchpointByteMatch(n, vaddress + byte);

    return value_match && state_match && ls_match && enabled;

Library pseudocode for aarch32/exceptions/aborts/AArch32.Abort

// AArch32.Abort()
// ===============
// Abort and Debug exception handling in an AArch32 translation regime.

AArch32.Abort(bits(32) vaddress, FaultRecord fault)

    // Check if routed to AArch64 state
    route_to_aarch64 = PSTATE.EL == EL0 && !ELUsingAArch32(EL1);

    if !route_to_aarch64 && EL2Enabled() && !ELUsingAArch32(EL2) then
        route_to_aarch64 = (HCR_EL2.TGE == '1' || IsSecondStage(fault) ||
                            (HaveRASExt() && HCR2.TEA == '1' && IsExternalAbort(fault)) ||
                            (IsDebugException(fault) && MDCR_EL2.TDE == '1'));

    if !route_to_aarch64 && HaveEL(EL3) && !ELUsingAArch32(EL3) then
        route_to_aarch64 = SCR_EL3.EA == '1' && IsExternalAbort(fault);

    if route_to_aarch64 then
        AArch64.Abort(ZeroExtend(vaddress), fault);
    elsif fault.acctype == AccType_IFETCH then
        AArch32.TakePrefetchAbortException(vaddress, fault);
    else
        AArch32.TakeDataAbortException(vaddress, fault);

Library pseudocode for aarch32/exceptions/aborts/AArch32.AbortSyndrome

// AArch32.AbortSyndrome()
// =======================
// Creates an exception syndrome record for Abort exceptions taken to Hyp mode
// from an AArch32 translation regime.

ExceptionRecord AArch32.AbortSyndrome(Exception type, FaultRecord fault, bits(32) vaddress)
    exception = ExceptionSyndrome(type);

    d_side = type == Exception_DataAbort;

    exception.syndrome = AArch32.FaultSyndrome(d_side, fault);
    exception.vaddress = ZeroExtend(vaddress);
    if IPAValid(fault) then
        exception.ipavalid = TRUE;
        exception.NS = fault.ipaddress.NS;
        exception.ipaddress = ZeroExtend(fault.ipaddress.address);
    else
        exception.ipavalid = FALSE;

    return exception;

Library pseudocode for aarch32/exceptions/aborts/AArch32.CheckPCAlignment

// AArch32.CheckPCAlignment()
// ==========================

AArch32.CheckPCAlignment()

    bits(32) pc = ThisInstrAddr();
    if (CurrentInstrSet() == InstrSet_A32 && pc<1> == '1') || pc<0> == '1' then
        if AArch32.GeneralExceptionsToAArch64() then AArch64.PCAlignmentFault();

        // Generate an Alignment fault Prefetch Abort exception
        vaddress = pc;
        acctype = AccType_IFETCH;
        iswrite = FALSE;
        secondstage = FALSE;
        AArch32.Abort(vaddress, AArch32.AlignmentFault(acctype, iswrite, secondstage));

Library pseudocode for aarch32/exceptions/aborts/AArch32.ReportDataAbort

// AArch32.ReportDataAbort()
// =========================
// Report syndrome information for aborts taken to modes other than Hyp mode.

AArch32.ReportDataAbort(boolean route_to_monitor, FaultRecord fault, bits(32) vaddress)

    // The encoding used in the IFSR or DFSR can be Long-descriptor format or Short-descriptor
    // format. Normally, the current translation table format determines the format. For an abort
    // from Non-secure state to Monitor mode, the IFSR or DFSR uses the Long-descriptor format if
    // any of the following applies:
    // * The Secure TTBCR.EAE is set to 1.
    // * The abort is synchronous and either:
    //   - It is taken from Hyp mode.
    //   - It is taken from EL1 or EL0, and the Non-secure TTBCR.EAE is set to 1.
    long_format = FALSE;
    if route_to_monitor && !IsSecure() then
        long_format = TTBCR_S.EAE == '1';
        if !IsSErrorInterrupt(fault) && !long_format then
            long_format = PSTATE.EL == EL2 || TTBCR.EAE == '1';
    else
        long_format = TTBCR.EAE == '1';
    d_side = TRUE;
    if long_format then
        syndrome = AArch32.FaultStatusLD(d_side, fault);
    else
        syndrome = AArch32.FaultStatusSD(d_side, fault);

    if fault.acctype == AccType_IC then
        if (!long_format &&
            boolean IMPLEMENTATION_DEFINED "Report I-cache maintenance fault in IFSR") then
            i_syndrome = syndrome;
            syndrome<10,3:0> = EncodeSDFSC(Fault_ICacheMaint, 1);
        else
            i_syndrome = bits(32) UNKNOWN;
        if route_to_monitor then
            IFSR_S = i_syndrome;
        else
            IFSR = i_syndrome;

    if route_to_monitor then
        DFSR_S = syndrome;
        DFAR_S = vaddress;
    else
        DFSR = syndrome;
        DFAR = vaddress;

    return;

Library pseudocode for aarch32/exceptions/aborts/AArch32.ReportPrefetchAbort

// AArch32.ReportPrefetchAbort()
// =============================
// Report syndrome information for aborts taken to modes other than Hyp mode.

AArch32.ReportPrefetchAbort(boolean route_to_monitor, FaultRecord fault, bits(32) vaddress)
    // The encoding used in the IFSR can be Long-descriptor format or Short-descriptor format.
    // Normally, the current translation table format determines the format. For an abort from
    // Non-secure state to Monitor mode, the IFSR uses the Long-descriptor format if any of the
    // following applies:
    // * The Secure TTBCR.EAE is set to 1.
    // * It is taken from Hyp mode.
    // * It is taken from EL1 or EL0, and the Non-secure TTBCR.EAE is set to 1.
    long_format = FALSE;
    if route_to_monitor && !IsSecure() then
        long_format = TTBCR_S.EAE == '1' || PSTATE.EL == EL2 || TTBCR.EAE == '1';
    else
        long_format = TTBCR.EAE == '1';

    d_side = FALSE;
    if long_format then
        fsr = AArch32.FaultStatusLD(d_side, fault);
    else
        fsr = AArch32.FaultStatusSD(d_side, fault);

    if route_to_monitor then
        IFSR_S = fsr;
        IFAR_S = vaddress;
    else
        IFSR = fsr;
        IFAR = vaddress;

    return;

Library pseudocode for aarch32/exceptions/aborts/AArch32.TakeDataAbortException

// AArch32.TakeDataAbortException()
// ================================

AArch32.TakeDataAbortException(bits(32) vaddress, FaultRecord fault)

    route_to_monitor = HaveEL(EL3) && SCR.EA == '1' && IsExternalAbort(fault);
    route_to_hyp = (HaveEL(EL2) && !IsSecure() && PSTATE.EL IN {EL0,EL1} &&
                    (HCR.TGE == '1' || IsSecondStage(fault) ||
                     (HaveRASExt() && HCR2.TEA == '1' && IsExternalAbort(fault)) ||
                     (IsDebugException(fault) && HDCR.TDE == '1')));

    bits(32) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x10;
    lr_offset = 8;

    if IsDebugException(fault) then DBGDSCRext.MOE = fault.debugmoe;
    if route_to_monitor then
        AArch32.ReportDataAbort(route_to_monitor, fault, vaddress);
        AArch32.EnterMonitorMode(preferred_exception_return, lr_offset, vect_offset);
    elsif PSTATE.EL == EL2 || route_to_hyp then
        exception = AArch32.AbortSyndrome(Exception_DataAbort, fault, vaddress);
        if PSTATE.EL == EL2 then
            AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset);
        else
            AArch32.EnterHypMode(exception, preferred_exception_return, 0x14);
    else
        AArch32.ReportDataAbort(route_to_monitor, fault, vaddress);
        AArch32.EnterMode(M32_Abort, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/aborts/AArch32.TakePrefetchAbortException

// AArch32.TakePrefetchAbortException()
// ====================================

AArch32.TakePrefetchAbortException(bits(32) vaddress, FaultRecord fault)

    route_to_monitor = HaveEL(EL3) && SCR.EA == '1' && IsExternalAbort(fault);
    route_to_hyp = (HaveEL(EL2) && !IsSecure() && PSTATE.EL IN {EL0,EL1} &&
                    (HCR.TGE == '1' || IsSecondStage(fault) ||
                     (HaveRASExt() && HCR2.TEA == '1' && IsExternalAbort(fault)) ||
                     (IsDebugException(fault) && HDCR.TDE == '1')));

    bits(32) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0C;
    lr_offset = 4;

    if IsDebugException(fault) then DBGDSCRext.MOE = fault.debugmoe;
    if route_to_monitor then
        AArch32.ReportPrefetchAbort(route_to_monitor, fault, vaddress);
        AArch32.EnterMonitorMode(preferred_exception_return, lr_offset, vect_offset);
    elsif PSTATE.EL == EL2 || route_to_hyp then
        if fault.type == Fault_Alignment then           // PC Alignment fault
            exception = ExceptionSyndrome(Exception_PCAlignment);
            exception.vaddress = ThisInstrAddr();
        else
            exception = AArch32.AbortSyndrome(Exception_InstructionAbort, fault, vaddress);
        if PSTATE.EL == EL2 then
            AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset);
        else
            AArch32.EnterHypMode(exception, preferred_exception_return, 0x14);
    else
        AArch32.ReportPrefetchAbort(route_to_monitor, fault, vaddress);
        AArch32.EnterMode(M32_Abort, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/aborts/BranchTargetException

// BranchTargetException
// =====================
// Raise branch target exception.

AArch64.BranchTargetException(bits(52) vaddress)

    route_to_el2 = EL2Enabled() && PSTATE.EL == EL0 && HCR_EL2.TGE == '1';

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_BranchTarget);
    exception.syndrome<1:0> = PSTATE.BTYPE;
    exception.syndrome<24:2> = Zeros();             // RES0

    if UInt(PSTATE.EL) > UInt(EL1) then
        AArch64.TakeException(PSTATE.EL, exception, preferred_exception_return, vect_offset);
    elsif route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch32/exceptions/aborts/EffectiveTCF

// EffectiveTCF()
// ==============
// Returns the TCF field applied to Tag Check Fails in the given Exception Level

bits(2) EffectiveTCF(bits(2) el)
    if el == EL3 then
        tcf = SCTLR_EL3.TCF;
    elsif el == EL2 then
        tcf = SCTLR_EL2.TCF;
    elsif el == EL1 then
        tcf = SCTLR_EL1.TCF;
    elsif el == EL0 && HCR_EL2.<E2H,TGE> == '11' then
        tcf = SCTLR_EL2.TCF0;
    elsif el == EL0 && HCR_EL2.<E2H,TGE> != '11' then
        tcf = SCTLR_EL1.TCF0;

    return tcf;

Library pseudocode for aarch32/exceptions/aborts/ReportTagCheckFail

// ReportTagCheckFail()
// ====================
// Records a tag fail exception into the appropriate TFSR_ELx

ReportTagCheckFail(bits(2) el, bit ttbr)
    if el == EL3 then
        assert ttbr == '0';
        TFSR_EL3.TF0 = '1';
    elsif el == EL2 then
        if ttbr == '0' then
            TFSR_EL2.TF0 = '1';
        else
            TFSR_EL2.TF1 = '1';
    elsif el == EL1 then
        if ttbr == '0' then
            TFSR_EL1.TF0 = '1';
        else
            TFSR_EL1.TF1 = '1';
    elsif el == EL0 then
        if ttbr == '0' then
            TFSRE0_EL1.TF0 = '1';
        else
            TFSRE0_EL1.TF1 = '1';
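As a rough model of this asynchronous path: VA bit 55 selects the TTBR, and the corresponding TF0/TF1 flag for the faulting Exception level is set. The C sketch below is illustrative only; the tfsr array is a hypothetical stand-in for the TFSR_ELx/TFSRE0_EL1 registers, and it omits the EL3 assertion that ttbr must be 0:

    #include <stdint.h>

    static uint8_t tfsr[4];     /* hypothetical model state, indexed by EL;
                                 * bit 0 = TF0, bit 1 = TF1 */

    static void report_tag_check_fail(unsigned el, uint64_t vaddress)
    {
        unsigned ttbr = (unsigned)((vaddress >> 55) & 1u);  /* TTBR select */
        tfsr[el] |= (uint8_t)(1u << ttbr);                  /* set TF0 or TF1 */
    }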

Library pseudocode for aarch32/exceptions/aborts/TagCheckFail

// TagCheckFail()
// ==============
// Handle a tag check fail condition

TagCheckFail(bits(64) vaddress, boolean iswrite)
    bits(2) tcf = EffectiveTCF(PSTATE.EL);
    if tcf == '01' then
        TagCheckFault(vaddress, iswrite);
    elsif tcf == '10' then
        ReportTagCheckFail(PSTATE.EL, vaddress<55>);

Library pseudocode for aarch32/exceptions/aborts/TagCheckFault

// TagCheckFault()
// ===============
// Raise a tag check fail exception.

TagCheckFault(bits(64) va, boolean write)
    bits(2) target_el;
    bits(64) preferred_exception_return = ThisInstrAddr();
    integer vect_offset = 0x0;

    if PSTATE.EL == EL0 then
        target_el = if HCR_EL2.TGE == '0' then EL1 else EL2;
    else
        target_el = PSTATE.EL;

    exception = ExceptionSyndrome(Exception_DataAbort);
    exception.syndrome<5:0> = '010001';
    if write then
        exception.syndrome<6> = '1';
    exception.vaddress = va;

    AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch32/exceptions/asynch/AArch32.TakePhysicalFIQException

// AArch32.TakePhysicalFIQException()
// ==================================

AArch32.TakePhysicalFIQException()

    // Check if routed to AArch64 state
    route_to_aarch64 = PSTATE.EL == EL0 && !ELUsingAArch32(EL1);
    if !route_to_aarch64 && EL2Enabled() && !ELUsingAArch32(EL2) then
        route_to_aarch64 = HCR_EL2.TGE == '1' || (HCR_EL2.FMO == '1' && !IsInHost());
    if !route_to_aarch64 && HaveEL(EL3) && !ELUsingAArch32(EL3) then
        route_to_aarch64 = SCR_EL3.FIQ == '1';

    if route_to_aarch64 then AArch64.TakePhysicalFIQException();

    route_to_monitor = HaveEL(EL3) && SCR.FIQ == '1';
    route_to_hyp = (EL2Enabled() && PSTATE.EL IN {EL0,EL1} &&
                    (HCR.TGE == '1' || HCR.FMO == '1'));
    bits(32) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x1C;
    lr_offset = 4;
    if route_to_monitor then
        AArch32.EnterMonitorMode(preferred_exception_return, lr_offset, vect_offset);
    elsif PSTATE.EL == EL2 || route_to_hyp then
        exception = ExceptionSyndrome(Exception_FIQ);
        AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset);
    else
        AArch32.EnterMode(M32_FIQ, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/asynch/AArch32.TakePhysicalIRQException

// AArch32.TakePhysicalIRQException()
// ==================================
// Take an enabled physical IRQ exception.

AArch32.TakePhysicalIRQException()

    // Check if routed to AArch64 state
    route_to_aarch64 = PSTATE.EL == EL0 && !ELUsingAArch32(EL1);
    if !route_to_aarch64 && EL2Enabled() && !ELUsingAArch32(EL2) then
        route_to_aarch64 = HCR_EL2.TGE == '1' || (HCR_EL2.IMO == '1' && !IsInHost());
    if !route_to_aarch64 && HaveEL(EL3) && !ELUsingAArch32(EL3) then
        route_to_aarch64 = SCR_EL3.IRQ == '1';

    if route_to_aarch64 then AArch64.TakePhysicalIRQException();

    route_to_monitor = HaveEL(EL3) && SCR.IRQ == '1';
    route_to_hyp = (EL2Enabled() && PSTATE.EL IN {EL0,EL1} &&
                    (HCR.TGE == '1' || HCR.IMO == '1'));
    bits(32) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x18;
    lr_offset = 4;
    if route_to_monitor then
        AArch32.EnterMonitorMode(preferred_exception_return, lr_offset, vect_offset);
    elsif PSTATE.EL == EL2 || route_to_hyp then
        exception = ExceptionSyndrome(Exception_IRQ);
        AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset);
    else
        AArch32.EnterMode(M32_IRQ, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/asynch/AArch32.TakePhysicalSErrorException

// AArch32.TakePhysicalSErrorException()
// =====================================

AArch32.TakePhysicalSErrorException(boolean parity, bit extflag, bits(2) errortype,
                                    boolean impdef_syndrome, bits(24) full_syndrome)

    ClearPendingPhysicalSError();
    // Check if routed to AArch64 state
    route_to_aarch64 = PSTATE.EL == EL0 && !ELUsingAArch32(EL1);
    if !route_to_aarch64 && EL2Enabled() && !ELUsingAArch32(EL2) then
        route_to_aarch64 = (HCR_EL2.TGE == '1' || (!IsInHost() && HCR_EL2.AMO == '1'));
    if !route_to_aarch64 && HaveEL(EL3) && !ELUsingAArch32(EL3) then
        route_to_aarch64 = SCR_EL3.EA == '1';

    if route_to_aarch64 then
        AArch64.TakePhysicalSErrorException(impdef_syndrome, full_syndrome);

    route_to_monitor = HaveEL(EL3) && SCR.EA == '1';
    route_to_hyp = (EL2Enabled() && PSTATE.EL IN {EL0,EL1} &&
                    (HCR.TGE == '1' || HCR.AMO == '1'));
    bits(32) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x10;
    lr_offset = 8;

    fault = AArch32.AsynchExternalAbort(parity, errortype, extflag);
    vaddress = bits(32) UNKNOWN;
    if route_to_monitor then
        AArch32.ReportDataAbort(route_to_monitor, fault, vaddress);
        AArch32.EnterMonitorMode(preferred_exception_return, lr_offset, vect_offset);
    elsif PSTATE.EL == EL2 || route_to_hyp then
        exception = AArch32.AbortSyndrome(Exception_DataAbort, fault, vaddress);
        if PSTATE.EL == EL2 then
            AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset);
        else
            AArch32.EnterHypMode(exception, preferred_exception_return, 0x14);
    else
        AArch32.ReportDataAbort(route_to_monitor, fault, vaddress);
        AArch32.EnterMode(M32_Abort, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/asynch/AArch32.TakeVirtualFIQException

// AArch32.TakeVirtualFIQException()
// =================================

AArch32.TakeVirtualFIQException()
    assert EL2Enabled() && PSTATE.EL IN {EL0,EL1};
    if ELUsingAArch32(EL2) then    // Virtual FIQ enabled if TGE==0 and FMO==1
        assert HCR.TGE == '0' && HCR.FMO == '1';
    else
        assert HCR_EL2.TGE == '0' && HCR_EL2.FMO == '1';

    // Check if routed to AArch64 state
    if PSTATE.EL == EL0 && !ELUsingAArch32(EL1) then AArch64.TakeVirtualFIQException();

    bits(32) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x1C;
    lr_offset = 4;

    AArch32.EnterMode(M32_FIQ, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/asynch/AArch32.TakeVirtualIRQException

// AArch32.TakeVirtualIRQException()
// =================================

AArch32.TakeVirtualIRQException()
    assert EL2Enabled() && PSTATE.EL IN {EL0,EL1};
    if ELUsingAArch32(EL2) then    // Virtual IRQs enabled if TGE==0 and IMO==1
        assert HCR.TGE == '0' && HCR.IMO == '1';
    else
        assert HCR_EL2.TGE == '0' && HCR_EL2.IMO == '1';

    // Check if routed to AArch64 state
    if PSTATE.EL == EL0 && !ELUsingAArch32(EL1) then AArch64.TakeVirtualIRQException();

    bits(32) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x18;
    lr_offset = 4;

    AArch32.EnterMode(M32_IRQ, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/asynch/AArch32.TakeVirtualSErrorException

// AArch32.TakeVirtualSErrorException()
// ====================================

AArch32.TakeVirtualSErrorException(bit extflag, bits(2) errortype, boolean impdef_syndrome,
                                   bits(24) full_syndrome)

    assert EL2Enabled() && PSTATE.EL IN {EL0,EL1};
    if ELUsingAArch32(EL2) then    // Virtual SError enabled if TGE==0 and AMO==1
        assert HCR.TGE == '0' && HCR.AMO == '1';
    else
        assert HCR_EL2.TGE == '0' && HCR_EL2.AMO == '1';

    // Check if routed to AArch64 state
    if PSTATE.EL == EL0 && !ELUsingAArch32(EL1) then
        AArch64.TakeVirtualSErrorException(impdef_syndrome, full_syndrome);

    route_to_monitor = FALSE;

    bits(32) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x10;
    lr_offset = 8;

    vaddress = bits(32) UNKNOWN;
    parity = FALSE;
    if HaveRASExt() then
        if ELUsingAArch32(EL2) then
            fault = AArch32.AsynchExternalAbort(FALSE, VDFSR.AET, VDFSR.ExT);
        else
            fault = AArch32.AsynchExternalAbort(FALSE, VSESR_EL2.AET, VSESR_EL2.ExT);
    else
        fault = AArch32.AsynchExternalAbort(parity, errortype, extflag);

    ClearPendingVirtualSError();
    AArch32.ReportDataAbort(route_to_monitor, fault, vaddress);
    AArch32.EnterMode(M32_Abort, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/debug/AArch32.SoftwareBreakpoint

// AArch32.SoftwareBreakpoint()
// ============================

AArch32.SoftwareBreakpoint(bits(16) immediate)

    if (EL2Enabled() && !ELUsingAArch32(EL2) &&
        (HCR_EL2.TGE == '1' || MDCR_EL2.TDE == '1')) || !ELUsingAArch32(EL1) then
        AArch64.SoftwareBreakpoint(immediate);

    vaddress = bits(32) UNKNOWN;
    acctype = AccType_IFETCH;           // Take as a Prefetch Abort
    iswrite = FALSE;
    entry = DebugException_BKPT;

    fault = AArch32.DebugFault(acctype, iswrite, entry);
    AArch32.Abort(vaddress, fault);

Library pseudocode for aarch32/exceptions/debug/DebugException

constant bits(4) DebugException_Breakpoint  = '0001';
constant bits(4) DebugException_BKPT        = '0011';
constant bits(4) DebugException_VectorCatch = '0101';
constant bits(4) DebugException_Watchpoint  = '1010';

Library pseudocode for aarch32/exceptions/exceptions/AArch32.CheckAdvSIMDOrFPRegisterTraps

// AArch32.CheckAdvSIMDOrFPRegisterTraps()
// =======================================
// Check if an instruction that accesses an Advanced SIMD and
// floating-point System register is trapped by an appropriate HCR.TIDx
// ID group trap control.

AArch32.CheckAdvSIMDOrFPRegisterTraps(bits(4) reg)

    if PSTATE.EL == EL1 && EL2Enabled() then
        tid0 = if ELUsingAArch32(EL2) then HCR.TID0 else HCR_EL2.TID0;
        tid3 = if ELUsingAArch32(EL2) then HCR.TID3 else HCR_EL2.TID3;

        if (tid0 == '1' && reg == '0000')                               // FPSID
          || (tid3 == '1' && reg IN {'0101', '0110', '0111'}) then      // MVFRx
            if ELUsingAArch32(EL2) then
                AArch32.AArch32SystemAccessTrap(EL2, ThisInstr());
            else
                AArch64.AArch32SystemAccessTrap(EL2, ThisInstr());

Library pseudocode for aarch32/exceptions/exceptions/AArch32.ExceptionClass

// AArch32.ExceptionClass()
// ========================
// Return the Exception Class and Instruction Length fields to be reported in HSR

(integer,bit) AArch32.ExceptionClass(Exception type)

    il = if ThisInstrLength() == 32 then '1' else '0';

    case type of
        when Exception_Uncategorized        ec = 0x00; il = '1';
        when Exception_WFxTrap              ec = 0x01;
        when Exception_CP15RTTrap           ec = 0x03;
        when Exception_CP15RRTTrap          ec = 0x04;
        when Exception_CP14RTTrap           ec = 0x05;
        when Exception_CP14DTTrap           ec = 0x06;
        when Exception_AdvSIMDFPAccessTrap  ec = 0x07;
        when Exception_FPIDTrap             ec = 0x08;
        when Exception_PACTrap              ec = 0x09;
        when Exception_CP14RRTTrap          ec = 0x0C;
        when Exception_BranchTarget         ec = 0x0D;
        when Exception_IllegalState         ec = 0x0E; il = '1';
        when Exception_SupervisorCall       ec = 0x11;
        when Exception_HypervisorCall       ec = 0x12;
        when Exception_MonitorCall          ec = 0x13;
        when Exception_ERetTrap             ec = 0x1A;
        when Exception_InstructionAbort     ec = 0x20; il = '1';
        when Exception_PCAlignment          ec = 0x22; il = '1';
        when Exception_DataAbort            ec = 0x24;
        when Exception_NV2DataAbort         ec = 0x25;
        when Exception_FPTrappedException   ec = 0x28;
        otherwise                           Unreachable();

    if ec IN {0x20,0x24} && PSTATE.EL == EL2 then
        ec = ec + 1;

    return (ec,il);

Library pseudocode for aarch32/exceptions/exceptions/AArch32.GeneralExceptionsToAArch64

// AArch32.GeneralExceptionsToAArch64()
// ====================================
// Returns TRUE if exceptions normally routed to EL1 are being handled at an Exception
// level using AArch64, because either EL1 is using AArch64 or TGE is in force and EL2
// is using AArch64.

boolean AArch32.GeneralExceptionsToAArch64()
    return ((PSTATE.EL == EL0 && !ELUsingAArch32(EL1)) ||
            (EL2Enabled() && !ELUsingAArch32(EL2) && HCR_EL2.TGE == '1'));

Library pseudocode for aarch32/exceptions/exceptions/AArch32.ReportHypEntry

// AArch32.ReportHypEntry()
// ========================
// Report syndrome information to Hyp mode registers.

AArch32.ReportHypEntry(ExceptionRecord exception)

    Exception type = exception.type;

    (ec,il) = AArch32.ExceptionClass(type);
    iss = exception.syndrome;

    // IL is not valid for Data Abort exceptions without valid instruction syndrome information
    if ec IN {0x24,0x25} && iss<24> == '0' then il = '1';

    HSR = ec<5:0>:il:iss;

    if type IN {Exception_InstructionAbort, Exception_PCAlignment} then
        HIFAR = exception.vaddress<31:0>;
        HDFAR = bits(32) UNKNOWN;
    elsif type == Exception_DataAbort then
        HIFAR = bits(32) UNKNOWN;
        HDFAR = exception.vaddress<31:0>;

    if exception.ipavalid then
        HPFAR<31:4> = exception.ipaddress<39:12>;
    else
        HPFAR<31:4> = bits(28) UNKNOWN;

    return;

Library pseudocode for aarch32/exceptions/exceptions/AArch32.ResetControlRegisters

// Resets System registers and memory-mapped control registers that have architecturally-defined
// reset values to those values.
AArch32.ResetControlRegisters(boolean cold_reset);

Library pseudocode for aarch32/exceptions/exceptions/AArch32.TakeReset

// AArch32.TakeReset()
// ===================
// Reset into AArch32 state

AArch32.TakeReset(boolean cold_reset)
    assert HighestELUsingAArch32();

    // Enter the highest implemented Exception level in AArch32 state
    if HaveEL(EL3) then
        AArch32.WriteMode(M32_Svc);
        SCR.NS = '0';                   // Secure state
    elsif HaveEL(EL2) then
        AArch32.WriteMode(M32_Hyp);
    else
        AArch32.WriteMode(M32_Svc);

    // Reset the CP14 and CP15 registers and other system components
    AArch32.ResetControlRegisters(cold_reset);
    FPEXC.EN = '0';

    // Reset all other PSTATE fields, including instruction set and endianness according to the
    // SCTLR values produced by the above call to ResetControlRegisters()
    PSTATE.<A,I,F> = '111';             // All asynchronous exceptions masked
    PSTATE.IT = '00000000';             // IT block state reset
    PSTATE.T = SCTLR.TE;                // Instruction set: TE=0: A32, TE=1: T32. PSTATE.J is RES0.
    PSTATE.E = SCTLR.EE;                // Endianness: EE=0: little-endian, EE=1: big-endian
    PSTATE.IL = '0';                    // Clear Illegal Execution state bit

    // All registers, bits and fields not reset by the above pseudocode or by the BranchTo() call
    // below are UNKNOWN bitstrings after reset. In particular, the return information registers
    // R14 or ELR_hyp and SPSR have UNKNOWN values, so that it
    // is impossible to return from a reset in an architecturally defined way.
    AArch32.ResetGeneralRegisters();
    AArch32.ResetSIMDFPRegisters();
    AArch32.ResetSpecialRegisters();
    ResetExternalDebugRegisters(cold_reset);

    bits(32) rv;                        // IMPLEMENTATION DEFINED reset vector

    if HaveEL(EL3) then
        if MVBAR<0> == '1' then         // Reset vector in MVBAR
            rv = MVBAR<31:1>:'0';
        else
            rv = bits(32) IMPLEMENTATION_DEFINED "reset vector address";
    else
        rv = RVBAR<31:1>:'0';

    // The reset vector must be correctly aligned
    assert rv<0> == '0' && (PSTATE.T == '1' || rv<1> == '0');

    BranchTo(rv, BranchType_RESET);

Library pseudocode for aarch32/exceptions/exceptions/ExcVectorBase

// ExcVectorBase()
// ===============

bits(32) ExcVectorBase()
    if SCTLR.V == '1' then              // Hivecs selected, base = 0xFFFF0000
        return Ones(16):Zeros(16);
    else
        return VBAR<31:5>:Zeros(5);
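Equivalently, in a minimal C sketch (illustrative; the helper name is hypothetical): SCTLR.V selects the Hivecs base 0xFFFF0000, otherwise the base is VBAR with bits <4:0> zeroed.

    #include <stdbool.h>
    #include <stdint.h>

    static uint32_t exc_vector_base(bool hivecs, uint32_t vbar)
    {
        return hivecs ? 0xFFFF0000u         /* Hivecs base */
                      : (vbar & ~0x1Fu);    /* VBAR<31:5> : 00000 */
    }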

Library pseudocode for aarch32/exceptions/ieeefp/AArch32.FPTrappedException

// AArch32.FPTrappedException()
// ============================

AArch32.FPTrappedException(bits(8) accumulated_exceptions)
    if AArch32.GeneralExceptionsToAArch64() then
        is_ase = FALSE;
        element = 0;
        AArch64.FPTrappedException(is_ase, element, accumulated_exceptions);
    FPEXC.DEX = '1';
    FPEXC.TFV = '1';
    FPEXC<7,4:0> = accumulated_exceptions<7,4:0>;   // IDF,IXF,UFF,OFF,DZF,IOF
    FPEXC<10:8> = '111';                            // VECITR is RES1

    AArch32.TakeUndefInstrException();

Library pseudocode for aarch32/exceptions/syscalls/AArch32.CallHypervisor

// AArch32.CallHypervisor()
// ========================
// Performs a HVC call

AArch32.CallHypervisor(bits(16) immediate)
    assert HaveEL(EL2);

    if !ELUsingAArch32(EL2) then
        AArch64.CallHypervisor(immediate);
    else
        AArch32.TakeHVCException(immediate);

Library pseudocode for aarch32/exceptions/syscalls/AArch32.CallSupervisor

// AArch32.CallSupervisor()
// ========================
// Calls the Supervisor

AArch32.CallSupervisor(bits(16) immediate)

    if AArch32.CurrentCond() != '1110' then
        immediate = bits(16) UNKNOWN;
    if AArch32.GeneralExceptionsToAArch64() then
        AArch64.CallSupervisor(immediate);
    else
        AArch32.TakeSVCException(immediate);

Library pseudocode for aarch32/exceptions/syscalls/AArch32.TakeHVCException

// AArch32.TakeHVCException()
// ==========================

AArch32.TakeHVCException(bits(16) immediate)
    assert HaveEL(EL2) && ELUsingAArch32(EL2);

    AArch32.ITAdvance();
    SSAdvance();
    bits(32) preferred_exception_return = NextInstrAddr();
    vect_offset = 0x08;

    exception = ExceptionSyndrome(Exception_HypervisorCall);
    exception.syndrome<15:0> = immediate;

    if PSTATE.EL == EL2 then
        AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset);
    else
        AArch32.EnterHypMode(exception, preferred_exception_return, 0x14);

Library pseudocode for aarch32/exceptions/syscalls/AArch32.TakeSMCException

// AArch32.TakeSMCException()
// ==========================

AArch32.TakeSMCException()
    assert HaveEL(EL3) && ELUsingAArch32(EL3);
    AArch32.ITAdvance();
    SSAdvance();
    bits(32) preferred_exception_return = NextInstrAddr();
    vect_offset = 0x08;
    lr_offset = 0;

    AArch32.EnterMonitorMode(preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/syscalls/AArch32.TakeSVCException

// AArch32.TakeSVCException()
// ==========================

AArch32.TakeSVCException(bits(16) immediate)

    AArch32.ITAdvance();
    SSAdvance();
    route_to_hyp = EL2Enabled() && PSTATE.EL == EL0 && HCR.TGE == '1';

    bits(32) preferred_exception_return = NextInstrAddr();
    vect_offset = 0x08;
    lr_offset = 0;

    if PSTATE.EL == EL2 || route_to_hyp then
        exception = ExceptionSyndrome(Exception_SupervisorCall);
        exception.syndrome<15:0> = immediate;
        if PSTATE.EL == EL2 then
            AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset);
        else
            AArch32.EnterHypMode(exception, preferred_exception_return, 0x14);
    else
        AArch32.EnterMode(M32_Svc, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/takeexception/AArch32.EnterHypMode

// AArch32.EnterHypMode()
// ======================
// Take an exception to Hyp mode.

AArch32.EnterHypMode(ExceptionRecord exception, bits(32) preferred_exception_return,
                     integer vect_offset)
    SynchronizeContext();
    assert HaveEL(EL2) && !IsSecure() && ELUsingAArch32(EL2);

    spsr = GetPSRFromPSTATE();
    if !(exception.type IN {Exception_IRQ, Exception_FIQ}) then
        AArch32.ReportHypEntry(exception);
    AArch32.WriteMode(M32_Hyp);
    SPSR[] = spsr;
    if HaveSSBSExt() then PSTATE.SSBS = HSCTLR.DSSBS;
    ELR_hyp = preferred_exception_return;
    PSTATE.T = HSCTLR.TE;               // PSTATE.J is RES0
    PSTATE.SS = '0';
    if !HaveEL(EL3) || SCR_GEN[].EA == '0' then PSTATE.A = '1';
    if !HaveEL(EL3) || SCR_GEN[].IRQ == '0' then PSTATE.I = '1';
    if !HaveEL(EL3) || SCR_GEN[].FIQ == '0' then PSTATE.F = '1';
    PSTATE.E = HSCTLR.EE;
    PSTATE.IL = '0';
    PSTATE.IT = '00000000';
    BranchTo(HVBAR<31:5>:vect_offset<4:0>, BranchType_EXCEPTION);

    EndOfInstruction();
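One detail worth pulling out of the sequence above: each of PSTATE.{A,I,F} is masked on entry to Hyp mode unless EL3 is implemented and the corresponding SCR routing bit (EA, IRQ, FIQ) is set, in which case that bit is left unchanged because the exception is EL3's to take. A minimal C sketch of just that rule (illustrative; the function name and the A:I:F bit-packing are assumptions):

    #include <stdbool.h>
    #include <stdint.h>

    /* "aif" packs PSTATE A:I:F into bits 2:0; each SCR bit is 0 or 1. */
    static uint8_t mask_aif_on_hyp_entry(bool have_el3, unsigned scr_ea,
                                         unsigned scr_irq, unsigned scr_fiq,
                                         uint8_t aif)
    {
        if (!have_el3 || scr_ea  == 0) aif |= 4u;   /* set A */
        if (!have_el3 || scr_irq == 0) aif |= 2u;   /* set I */
        if (!have_el3 || scr_fiq == 0) aif |= 1u;   /* set F */
        return aif;
    }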

Library pseudocode for aarch32/exceptions/takeexception/AArch32.EnterMode

// AArch32.EnterMode()
// ===================
// Take an exception to a mode other than Monitor and Hyp mode.

AArch32.EnterMode(bits(5) target_mode, bits(32) preferred_exception_return, integer lr_offset,
                  integer vect_offset)
    SynchronizeContext();
    assert ELUsingAArch32(EL1) && PSTATE.EL != EL2;

    spsr = GetPSRFromPSTATE();
    if PSTATE.M == M32_Monitor then SCR.NS = '0';
    AArch32.WriteMode(target_mode);
    SPSR[] = spsr;
    if HaveSSBSExt() then PSTATE.SSBS = SCTLR.DSSBS;
    R[14] = preferred_exception_return + lr_offset;
    PSTATE.T = SCTLR.TE;                // PSTATE.J is RES0
    PSTATE.SS = '0';
    if target_mode == M32_FIQ then
        PSTATE.<A,I,F> = '111';
    elsif target_mode IN {M32_Abort, M32_IRQ} then
        PSTATE.<A,I> = '11';
    else
        PSTATE.I = '1';
    PSTATE.E = SCTLR.EE;
    PSTATE.IL = '0';
    PSTATE.IT = '00000000';
    if HavePANExt() && SCTLR.SPAN == '0' then PSTATE.PAN = '1';
    BranchTo(ExcVectorBase()<31:5>:vect_offset<4:0>, BranchType_EXCEPTION);

    EndOfInstruction();

Library pseudocode for aarch32/exceptions/takeexception/AArch32.EnterMonitorMode

// AArch32.EnterMonitorMode()
// ==========================
// Take an exception to Monitor mode.

AArch32.EnterMonitorMode(bits(32) preferred_exception_return, integer lr_offset,
                         integer vect_offset)
    SynchronizeContext();
    assert HaveEL(EL3) && ELUsingAArch32(EL3);

    from_secure = IsSecure();
    spsr = GetPSRFromPSTATE();
    if PSTATE.M == M32_Monitor then SCR.NS = '0';
    AArch32.WriteMode(M32_Monitor);
    SPSR[] = spsr;
    if HaveSSBSExt() then PSTATE.SSBS = SCTLR.DSSBS;
    R[14] = preferred_exception_return + lr_offset;
    PSTATE.T = SCTLR.TE;                // PSTATE.J is RES0
    PSTATE.SS = '0';
    PSTATE.<A,I,F> = '111';
    PSTATE.E = SCTLR.EE;
    PSTATE.IL = '0';
    PSTATE.IT = '00000000';
    if HavePANExt() then
        if !from_secure then
            PSTATE.PAN = '0';
        elsif SCTLR.SPAN == '0' then
            PSTATE.PAN = '1';
    BranchTo(MVBAR<31:5>:vect_offset<4:0>, BranchType_EXCEPTION);

    EndOfInstruction();

Library pseudocode for aarch32/exceptions/traps/AArch32.AArch32SystemAccessTrap

// AArch32.AArch32SystemAccessTrap()
// =================================
// Trapped AArch32 System register access other than due to CPTR_EL2 or CPACR_EL1.

AArch32.AArch32SystemAccessTrap(bits(2) target_el, bits(32) instr)
    assert HaveEL(target_el) && target_el != EL0 && UInt(target_el) >= UInt(PSTATE.EL);

    if !ELUsingAArch32(target_el) || AArch32.GeneralExceptionsToAArch64() then
        AArch64.AArch32SystemAccessTrap(target_el, instr);

    assert target_el IN {EL1,EL2};

    if target_el == EL2 then
        exception = AArch32.AArch32SystemAccessTrapSyndrome(instr);
        AArch32.TakeHypTrapException(exception);
    else
        AArch32.TakeUndefInstrException();

Library pseudocode for aarch32/exceptions/traps/AArch32.AArch32SystemAccessTrapSyndrome

// AArch32.AArch32SystemAccessTrapSyndrome()
// =========================================
// Return the syndrome information for traps on AArch32 MCR, MCRR, MRC, MRRC, and VMRS instructions,
// other than traps that are due to HCPTR or CPACR.

ExceptionRecord AArch32.AArch32SystemAccessTrapSyndrome(bits(32) instr)
    ExceptionRecord exception;

    cpnum = UInt(instr<11:8>);
    bits(20) iss = Zeros();

    if instr<27:24> == '1110' && instr<4> == '1' && instr<31:28> != '1111' then     // MRC/MCR
        case cpnum of
            when 10  exception = ExceptionSyndrome(Exception_FPIDTrap);
            when 14  exception = ExceptionSyndrome(Exception_CP14RTTrap);
            when 15  exception = ExceptionSyndrome(Exception_CP15RTTrap);
            otherwise Unreachable();
        iss<19:17> = instr<7:5>;            // opc2
        iss<16:14> = instr<23:21>;          // opc1
        iss<13:10> = instr<19:16>;          // CRn
        iss<8:5>   = instr<15:12>;          // Rt
        iss<4:1>   = instr<3:0>;            // CRm
    elsif instr<27:21> == '1100010' && instr<31:28> != '1111' then                  // MRRC/MCRR
        case cpnum of
            when 14  exception = ExceptionSyndrome(Exception_CP14RRTTrap);
            when 15  exception = ExceptionSyndrome(Exception_CP15RRTTrap);
            otherwise Unreachable();
        iss<19:16> = instr<7:4>;            // opc1
        iss<13:10> = instr<19:16>;          // Rt2
        iss<8:5>   = instr<15:12>;          // Rt
        iss<4:1>   = instr<3:0>;            // CRm
    elsif instr<27:25> == '110' && instr<31:28> != '1111' then                      // LDC/STC
        assert cpnum == 14;
        exception = ExceptionSyndrome(Exception_CP14DTTrap);
        iss<19:12> = instr<7:0>;            // imm8
        iss<4>     = instr<23>;             // U
        iss<2:1>   = instr<24,21>;          // P,W
        if instr<19:16> == '1111' then      // Rn==15, LDC(Literal addressing)/STC
            iss<8:5> = bits(4) UNKNOWN;
            iss<3> = '1';
        else
            iss<8:5> = instr<19:16>;        // Rn
            iss<3> = '0';
    else
        Unreachable();
    iss<0> = instr<20>;                     // Direction

    exception.syndrome<24:20> = ConditionSyndrome();
    exception.syndrome<19:0> = iss;

    return exception;
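For the MRC/MCR case, the ISS packing above can be mirrored directly with shifts and masks. The sketch below is illustrative C (the function name is hypothetical); it packs opc2, opc1, CRn, Rt, CRm, and the direction bit from a 32-bit A32 instruction word:

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t mcr_mrc_iss(uint32_t instr)
    {
        uint32_t iss = 0;
        iss |= ((instr >> 5)  & 0x7u) << 17;    /* opc2, instr<7:5>   */
        iss |= ((instr >> 21) & 0x7u) << 14;    /* opc1, instr<23:21> */
        iss |= ((instr >> 16) & 0xFu) << 10;    /* CRn,  instr<19:16> */
        iss |= ((instr >> 12) & 0xFu) << 5;     /* Rt,   instr<15:12> */
        iss |= (instr & 0xFu) << 1;             /* CRm,  instr<3:0>   */
        iss |= (instr >> 20) & 0x1u;            /* 1 = MRC (read), 0 = MCR */
        return iss;
    }

    int main(void)
    {
        /* MRC p15, 0, r0, c1, c0, 0 encodes as 0xEE110F10:
         * CRn = 1 and direction = read, giving ISS 0x00401. */
        printf("iss = 0x%05X\n", mcr_mrc_iss(0xEE110F10u));
        return 0;
    }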

Library pseudocode for aarch32/exceptions/traps/AArch32.CheckAdvSIMDOrFPEnabled

// AArch32.CheckAdvSIMDOrFPEnabled()
// =================================
// Check against CPACR, FPEXC, HCPTR, NSACR, and CPTR_EL3.

AArch32.CheckAdvSIMDOrFPEnabled(boolean fpexc_check, boolean advsimd)
    if PSTATE.EL == EL0 && (!HaveEL(EL2) || (!ELUsingAArch32(EL2) && HCR_EL2.TGE == '0')) &&
       !ELUsingAArch32(EL1) then
        // The PE behaves as if FPEXC.EN is 1
        AArch64.CheckFPAdvSIMDEnabled();
    elsif PSTATE.EL == EL0 && HaveEL(EL2) && !ELUsingAArch32(EL2) && HCR_EL2.TGE == '1' &&
          !ELUsingAArch32(EL1) then
        if fpexc_check && HCR_EL2.RW == '0' then
            fpexc_en = bits(1) IMPLEMENTATION_DEFINED "FPEXC.EN value when TGE==1 and RW==0";
            if fpexc_en == '0' then UNDEFINED;
        AArch64.CheckFPAdvSIMDEnabled();
    else
        cpacr_asedis = CPACR.ASEDIS;
        cpacr_cp10 = CPACR.cp10;

        if HaveEL(EL3) && ELUsingAArch32(EL3) && !IsSecure() then
            // Check if access disabled in NSACR
            if NSACR.NSASEDIS == '1' then cpacr_asedis = '1';
            if NSACR.cp10 == '0' then cpacr_cp10 = '00';

        if PSTATE.EL != EL2 then
            // Check if Advanced SIMD disabled in CPACR
            if advsimd && cpacr_asedis == '1' then UNDEFINED;

            if cpacr_cp10 == '10' then
                (c, cpacr_cp10) = ConstrainUnpredictableBits(Unpredictable_RESCPACR);

            // Check if access disabled in CPACR
            case cpacr_cp10 of
                when '00' disabled = TRUE;
                when '01' disabled = PSTATE.EL == EL0;
                when '11' disabled = FALSE;
            if disabled then UNDEFINED;

        // If required, check FPEXC enabled bit.
        if fpexc_check && FPEXC.EN == '0' then UNDEFINED;

        AArch32.CheckFPAdvSIMDTrap(advsimd);    // Also check against HCPTR and CPTR_EL3

Library pseudocode for aarch32/exceptions/traps/AArch32.CheckFPAdvSIMDTrap

// AArch32.CheckFPAdvSIMDTrap()
// ============================
// Check against CPTR_EL2 and CPTR_EL3.

AArch32.CheckFPAdvSIMDTrap(boolean advsimd)
    if EL2Enabled() && !ELUsingAArch32(EL2) then
        AArch64.CheckFPAdvSIMDTrap();
    else
        if HaveEL(EL2) && !IsSecure() then
            hcptr_tase = HCPTR.TASE;
            hcptr_cp10 = HCPTR.TCP10;

            if HaveEL(EL3) && ELUsingAArch32(EL3) && !IsSecure() then
                // Check if access disabled in NSACR
                if NSACR.NSASEDIS == '1' then hcptr_tase = '1';
                if NSACR.cp10 == '0' then hcptr_cp10 = '1';

            // Check if access disabled in HCPTR
            if (advsimd && hcptr_tase == '1') || hcptr_cp10 == '1' then
                exception = ExceptionSyndrome(Exception_AdvSIMDFPAccessTrap);
                exception.syndrome<24:20> = ConditionSyndrome();
                if advsimd then
                    exception.syndrome<5> = '1';
                else
                    exception.syndrome<5> = '0';
                exception.syndrome<3:0> = '1010';   // coproc field, always 0xA
                if PSTATE.EL == EL2 then
                    AArch32.TakeUndefInstrException(exception);
                else
                    AArch32.TakeHypTrapException(exception);

        if HaveEL(EL3) && !ELUsingAArch32(EL3) then
            // Check if access disabled in CPTR_EL3
            if CPTR_EL3.TFP == '1' then AArch64.AdvSIMDFPAccessTrap(EL3);
    return;

Library pseudocode for aarch32/exceptions/traps/AArch32.CheckForSMCUndefOrTrap

// AArch32.CheckForSMCUndefOrTrap()
// ================================
// Check for UNDEFINED or trap on SMC instruction

AArch32.CheckForSMCUndefOrTrap()

    if !HaveEL(EL3) || PSTATE.EL == EL0 then UNDEFINED;

    if EL2Enabled() && !ELUsingAArch32(EL2) then
        AArch64.CheckForSMCUndefOrTrap(Zeros(16));
    else
        route_to_hyp = HaveEL(EL2) && !IsSecure() && PSTATE.EL == EL1 && HCR.TSC == '1';
        if route_to_hyp then
            exception = ExceptionSyndrome(Exception_MonitorCall);
            AArch32.TakeHypTrapException(exception);

Library pseudocode for aarch32/exceptions/traps/AArch32.CheckForWFxTrap

// AArch32.CheckForWFxTrap() // ========================= // Check for trap on WFE or WFI instruction AArch32.CheckForWFxTrap(bits(2) target_el, boolean is_wfe) assert HaveEL(target_el); // Check for routing to AArch64 if !ELUsingAArch32(target_el) then AArch64.CheckForWFxTrap(target_el, is_wfe); return; case target_el of when EL1 trap = (if is_wfe then SCTLR.nTWE else SCTLR.nTWI) == '0'; when EL2 trap = (if is_wfe then HCR.TWE else HCR.TWI) == '1'; when EL3 trap = (if is_wfe then SCR.TWE else SCR.TWI) == '1'; if trap then if target_el == EL1 && EL2Enabled() && !ELUsingAArch32(EL2) && HCR_EL2.TGE == '1' then AArch64.WFxTrap(target_el, is_wfe); if target_el == EL3 then AArch32.TakeMonitorTrapException(); elsif target_el == EL2 then exception = ExceptionSyndrome(Exception_WFxTrap); exception.syndrome<24:20> = ConditionSyndrome(); exception.syndrome<0> = if is_wfe then '1' else '0'; AArch32.TakeHypTrapException(exception); else AArch32.TakeUndefInstrException();

Library pseudocode for aarch32/exceptions/traps/AArch32.CheckITEnabled

// AArch32.CheckITEnabled() // ======================== // Check whether the T32 IT instruction is disabled. AArch32.CheckITEnabled(bits(4) mask) if PSTATE.EL == EL2 then it_disabled = HSCTLR.ITD; else it_disabled = (if ELUsingAArch32(EL1) then SCTLR.ITD else SCTLR[].ITD); if it_disabled == '1' then if mask != '1000' then UNDEFINED; // Otherwise whether the IT block is allowed depends on hw1 of the next instruction. next_instr = AArch32.MemSingle[NextInstrAddr(), 2, AccType_IFETCH, TRUE]; if next_instr IN {'11xxxxxxxxxxxxxx', '1011xxxxxxxxxxxx', '10100xxxxxxxxxxx', '01001xxxxxxxxxxx', '010001xxx1111xxx', '010001xx1xxxx111'} then // It is IMPLEMENTATION DEFINED whether the Undefined Instruction exception is // taken on the IT instruction or the next instruction. This is not reflected in // the pseudocode, which always takes the exception on the IT instruction. This // also does not take into account cases where the next instruction is UNPREDICTABLE. UNDEFINED; return;

Library pseudocode for aarch32/exceptions/traps/AArch32.CheckIllegalState

// AArch32.CheckIllegalState() // =========================== // Check PSTATE.IL bit and generate Illegal Execution state exception if set. AArch32.CheckIllegalState() if AArch32.GeneralExceptionsToAArch64() then AArch64.CheckIllegalState(); elsif PSTATE.IL == '1' then route_to_hyp = EL2Enabled() && PSTATE.EL == EL0 && HCR.TGE == '1'; bits(32) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x04; if PSTATE.EL == EL2 || route_to_hyp then exception = ExceptionSyndrome(Exception_IllegalState); if PSTATE.EL == EL2 then AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset); else AArch32.EnterHypMode(exception, preferred_exception_return, 0x14); else AArch32.TakeUndefInstrException();

Library pseudocode for aarch32/exceptions/traps/AArch32.CheckSETENDEnabled

// AArch32.CheckSETENDEnabled() // ============================ // Check whether the AArch32 SETEND instruction is disabled. AArch32.CheckSETENDEnabled() if PSTATE.EL == EL2 then setend_disabled = HSCTLR.SED; else setend_disabled = (if ELUsingAArch32(EL1) then SCTLR.SED else SCTLR[].SED); if setend_disabled == '1' then UNDEFINED; return;

Library pseudocode for aarch32/exceptions/traps/AArch32.TakeHypTrapException

// AArch32.TakeHypTrapException() // ============================== // Exceptions routed to Hyp mode as a Hyp Trap exception. AArch32.TakeHypTrapException(ExceptionRecord exception) assert HaveEL(EL2) && !IsSecure() && ELUsingAArch32(EL2); bits(32) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x14; AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch32/exceptions/traps/AArch32.TakeMonitorTrapException

// AArch32.TakeMonitorTrapException() // ================================== // Exceptions routed to Monitor mode as a Monitor Trap exception. AArch32.TakeMonitorTrapException() assert HaveEL(EL3) && ELUsingAArch32(EL3); bits(32) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x04; lr_offset = if CurrentInstrSet() == InstrSet_A32 then 4 else 2; AArch32.EnterMonitorMode(preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/traps/AArch32.TakeUndefInstrException

// AArch32.TakeUndefInstrException() // ================================= AArch32.TakeUndefInstrException() exception = ExceptionSyndrome(Exception_Uncategorized); AArch32.TakeUndefInstrException(exception); // AArch32.TakeUndefInstrException() // ================================= AArch32.TakeUndefInstrException(ExceptionRecord exception) route_to_hyp = EL2Enabled() && PSTATE.EL == EL0 && HCR.TGE == '1'; bits(32) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x04; lr_offset = if CurrentInstrSet() == InstrSet_A32 then 4 else 2; if PSTATE.EL == EL2 then AArch32.EnterHypMode(exception, preferred_exception_return, vect_offset); elsif route_to_hyp then AArch32.EnterHypMode(exception, preferred_exception_return, 0x14); else AArch32.EnterMode(M32_Undef, preferred_exception_return, lr_offset, vect_offset);

Library pseudocode for aarch32/exceptions/traps/AArch32.UndefinedFault

// AArch32.UndefinedFault() // ======================== AArch32.UndefinedFault() if AArch32.GeneralExceptionsToAArch64() then AArch64.UndefinedFault(); AArch32.TakeUndefInstrException();

Library pseudocode for aarch32/functions/aborts/AArch32.CreateFaultRecord

// AArch32.CreateFaultRecord() // =========================== FaultRecord AArch32.CreateFaultRecord(Fault type, bits(40) ipaddress, bits(4) domain, integer level, AccType acctype, boolean write, bit extflag, bits(4) debugmoe, bits(2) errortype, boolean secondstage, boolean s2fs1walk) FaultRecord fault; fault.type = type; if (type != Fault_None && PSTATE.EL != EL2 && TTBCR.EAE == '0' && !secondstage && !s2fs1walk && AArch32.DomainValid(type, level)) then fault.domain = domain; else fault.domain = bits(4) UNKNOWN; fault.debugmoe = debugmoe; fault.errortype = errortype; fault.ipaddress.NS = bit UNKNOWN; fault.ipaddress.address = ZeroExtend(ipaddress); fault.level = level; fault.acctype = acctype; fault.write = write; fault.extflag = extflag; fault.secondstage = secondstage; fault.s2fs1walk = s2fs1walk; return fault;

Library pseudocode for aarch32/functions/aborts/AArch32.DomainValid

// AArch32.DomainValid() // ===================== // Returns TRUE if the Domain is valid for a Short-descriptor translation scheme. boolean AArch32.DomainValid(Fault type, integer level) assert type != Fault_None; case type of when Fault_Domain return TRUE; when Fault_Translation, Fault_AccessFlag, Fault_SyncExternalOnWalk, Fault_SyncParityOnWalk return level == 2; otherwise return FALSE;

Library pseudocode for aarch32/functions/aborts/AArch32.FaultStatusLD

// AArch32.FaultStatusLD() // ======================= // Creates an exception fault status value for Abort and Watchpoint exceptions taken // to Abort mode using AArch32 and Long-descriptor format. bits(32) AArch32.FaultStatusLD(boolean d_side, FaultRecord fault) assert fault.type != Fault_None; bits(32) fsr = Zeros(); if HaveRASExt() && IsAsyncAbort(fault) then fsr<15:14> = fault.errortype; if d_side then if fault.acctype IN {AccType_DC, AccType_IC, AccType_AT} then fsr<13> = '1'; fsr<11> = '1'; else fsr<11> = if fault.write then '1' else '0'; if IsExternalAbort(fault) then fsr<12> = fault.extflag; fsr<9> = '1'; fsr<5:0> = EncodeLDFSC(fault.type, fault.level); return fsr;

Library pseudocode for aarch32/functions/aborts/AArch32.FaultStatusSD

// AArch32.FaultStatusSD() // ======================= // Creates an exception fault status value for Abort and Watchpoint exceptions taken // to Abort mode using AArch32 and Short-descriptor format. bits(32) AArch32.FaultStatusSD(boolean d_side, FaultRecord fault) assert fault.type != Fault_None; bits(32) fsr = Zeros(); if HaveRASExt() && IsAsyncAbort(fault) then fsr<15:14> = fault.errortype; if d_side then if fault.acctype IN {AccType_DC, AccType_IC, AccType_AT} then fsr<13> = '1'; fsr<11> = '1'; else fsr<11> = if fault.write then '1' else '0'; if IsExternalAbort(fault) then fsr<12> = fault.extflag; fsr<9> = '0'; fsr<10,3:0> = EncodeSDFSC(fault.type, fault.level); if d_side then fsr<7:4> = fault.domain; // Domain field (data fault only) return fsr;

Library pseudocode for aarch32/functions/aborts/AArch32.FaultSyndrome

// AArch32.FaultSyndrome() // ======================= // Creates an exception syndrome value for Abort and Watchpoint exceptions taken to // AArch32 Hyp mode. bits(25) AArch32.FaultSyndrome(boolean d_side, FaultRecord fault) assert fault.type != Fault_None; bits(25) iss = Zeros(); if HaveRASExt() && IsAsyncAbort(fault) then iss<11:10> = fault.errortype; // AET if d_side then if IsSecondStage(fault) && !fault.s2fs1walk then iss<24:14> = LSInstructionSyndrome(); if fault.acctype IN {AccType_DC, AccType_DC_UNPRIV, AccType_IC, AccType_AT} then iss<8> = '1'; iss<6> = '1'; else iss<6> = if fault.write then '1' else '0'; if IsExternalAbort(fault) then iss<9> = fault.extflag; iss<7> = if fault.s2fs1walk then '1' else '0'; iss<5:0> = EncodeLDFSC(fault.type, fault.level); return iss;

Library pseudocode for aarch32/functions/aborts/EncodeSDFSC

// EncodeSDFSC() // ============= // Function that gives the Short-descriptor FSR code for different types of Fault bits(5) EncodeSDFSC(Fault type, integer level) bits(5) result; case type of when Fault_AccessFlag assert level IN {1,2}; result = if level == 1 then '00011' else '00110'; when Fault_Alignment result = '00001'; when Fault_Permission assert level IN {1,2}; result = if level == 1 then '01101' else '01111'; when Fault_Domain assert level IN {1,2}; result = if level == 1 then '01001' else '01011'; when Fault_Translation assert level IN {1,2}; result = if level == 1 then '00101' else '00111'; when Fault_SyncExternal result = '01000'; when Fault_SyncExternalOnWalk assert level IN {1,2}; result = if level == 1 then '01100' else '01110'; when Fault_SyncParity result = '11001'; when Fault_SyncParityOnWalk assert level IN {1,2}; result = if level == 1 then '11100' else '11110'; when Fault_AsyncParity result = '11000'; when Fault_AsyncExternal result = '10110'; when Fault_Debug result = '00010'; when Fault_TLBConflict result = '10000'; when Fault_Lockdown result = '10100'; // IMPLEMENTATION DEFINED when Fault_Exclusive result = '10101'; // IMPLEMENTATION DEFINED when Fault_ICacheMaint result = '00100'; otherwise Unreachable(); return result;
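
The same case statement can be read as a lookup table. Below is a sketch in Python, keyed by fault kind and lookup level, with None marking the level-independent encodings; the abbreviated fault names are ours.

# Short-descriptor FSR codes from the case statement above.
SDFSC = {
    ("AccessFlag", 1): 0b00011, ("AccessFlag", 2): 0b00110,
    ("Translation", 1): 0b00101, ("Translation", 2): 0b00111,
    ("Domain", 1): 0b01001, ("Domain", 2): 0b01011,
    ("Permission", 1): 0b01101, ("Permission", 2): 0b01111,
    ("SyncExternalOnWalk", 1): 0b01100, ("SyncExternalOnWalk", 2): 0b01110,
    ("SyncParityOnWalk", 1): 0b11100, ("SyncParityOnWalk", 2): 0b11110,
    ("Alignment", None): 0b00001, ("SyncExternal", None): 0b01000,
    ("SyncParity", None): 0b11001, ("AsyncParity", None): 0b11000,
    ("AsyncExternal", None): 0b10110, ("Debug", None): 0b00010,
    ("TLBConflict", None): 0b10000, ("Lockdown", None): 0b10100,
    ("Exclusive", None): 0b10101, ("ICacheMaint", None): 0b00100,
}

assert SDFSC[("Translation", 2)] == 0b00111   # level 2 translation fault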

Library pseudocode for aarch32/functions/common/A32ExpandImm

// A32ExpandImm() // ============== bits(32) A32ExpandImm(bits(12) imm12) // PSTATE.C argument to following function call does not affect the imm32 result. (imm32, -) = A32ExpandImm_C(imm12, PSTATE.C); return imm32;

Library pseudocode for aarch32/functions/common/A32ExpandImm_C

// A32ExpandImm_C() // ================ (bits(32), bit) A32ExpandImm_C(bits(12) imm12, bit carry_in) unrotated_value = ZeroExtend(imm12<7:0>, 32); (imm32, carry_out) = Shift_C(unrotated_value, SRType_ROR, 2*UInt(imm12<11:8>), carry_in); return (imm32, carry_out);
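
In other words, the 12-bit immediate is an 8-bit value rotated right by twice the 4-bit rotation field, and the carry-out only changes for a non-zero rotation. A minimal Python model, with names of our choosing:

def ror32(x, n):
    """Rotate a 32-bit value right by n bits (n may be 0)."""
    n %= 32
    return ((x >> n) | (x << (32 - n))) & 0xFFFFFFFF if n else x

def a32_expand_imm_c(imm12, carry_in):
    """A32 modified-immediate expansion: rotate the low 8 bits right by
    twice the top 4 bits; carry-out is bit 31 of a non-zero rotation."""
    rot = 2 * (imm12 >> 8)
    imm32 = ror32(imm12 & 0xFF, rot)
    carry_out = carry_in if rot == 0 else (imm32 >> 31) & 1
    return imm32, carry_out

# 0x4FF: 0xFF rotated right by 8 gives 0xFF000000, carry-out = bit 31 = 1.
assert a32_expand_imm_c(0x4FF, 0) == (0xFF000000, 1)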

Library pseudocode for aarch32/functions/common/DecodeImmShift

// DecodeImmShift() // ================ (SRType, integer) DecodeImmShift(bits(2) type, bits(5) imm5) case type of when '00' shift_t = SRType_LSL; shift_n = UInt(imm5); when '01' shift_t = SRType_LSR; shift_n = if imm5 == '00000' then 32 else UInt(imm5); when '10' shift_t = SRType_ASR; shift_n = if imm5 == '00000' then 32 else UInt(imm5); when '11' if imm5 == '00000' then shift_t = SRType_RRX; shift_n = 1; else shift_t = SRType_ROR; shift_n = UInt(imm5); return (shift_t, shift_n);
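
The zero-immediate special cases are the only subtlety here: imm5 == 0 selects a 32-bit LSR/ASR shift, and turns ROR into RRX with amount 1. A Python sketch of the same decode, using string tags in place of SRType:

def decode_imm_shift(shift_type, imm5):
    """Map a 2-bit shift type and 5-bit immediate onto (kind, amount),
    with the zero-immediate special cases from the case statement above."""
    if shift_type == 0b00:
        return "LSL", imm5
    if shift_type == 0b01:
        return "LSR", 32 if imm5 == 0 else imm5
    if shift_type == 0b10:
        return "ASR", 32 if imm5 == 0 else imm5
    return ("RRX", 1) if imm5 == 0 else ("ROR", imm5)

assert decode_imm_shift(0b01, 0) == ("LSR", 32)
assert decode_imm_shift(0b11, 0) == ("RRX", 1)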

Library pseudocode for aarch32/functions/common/DecodeRegShift

// DecodeRegShift() // ================ SRType DecodeRegShift(bits(2) type) case type of when '00' shift_t = SRType_LSL; when '01' shift_t = SRType_LSR; when '10' shift_t = SRType_ASR; when '11' shift_t = SRType_ROR; return shift_t;

Library pseudocode for aarch32/functions/common/RRX

// RRX() // ===== bits(N) RRX(bits(N) x, bit carry_in) (result, -) = RRX_C(x, carry_in); return result;

Library pseudocode for aarch32/functions/common/RRX_C

// RRX_C() // ======= (bits(N), bit) RRX_C(bits(N) x, bit carry_in) result = carry_in : x<N-1:1>; carry_out = x<0>; return (result, carry_out);
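
RRX is a 33-bit rotate through the carry flag: the old carry enters at the top and the old bit 0 becomes the new carry. For example, in Python:

def rrx_c(x, carry_in, n=32):
    """Rotate right with extend: shift x right one bit, insert the old
    carry at the top, and return the shifted-out bit as the new carry."""
    result = (carry_in << (n - 1)) | (x >> 1)
    return result, x & 1

assert rrx_c(0x00000001, 1) == (0x80000000, 1)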

Library pseudocode for aarch32/functions/common/SRType

enumeration SRType {SRType_LSL, SRType_LSR, SRType_ASR, SRType_ROR, SRType_RRX};

Library pseudocode for aarch32/functions/common/Shift

// Shift() // ======= bits(N) Shift(bits(N) value, SRType type, integer amount, bit carry_in) (result, -) = Shift_C(value, type, amount, carry_in); return result;

Library pseudocode for aarch32/functions/common/Shift_C

// Shift_C() // ========= (bits(N), bit) Shift_C(bits(N) value, SRType type, integer amount, bit carry_in) assert !(type == SRType_RRX && amount != 1); if amount == 0 then (result, carry_out) = (value, carry_in); else case type of when SRType_LSL (result, carry_out) = LSL_C(value, amount); when SRType_LSR (result, carry_out) = LSR_C(value, amount); when SRType_ASR (result, carry_out) = ASR_C(value, amount); when SRType_ROR (result, carry_out) = ROR_C(value, amount); when SRType_RRX (result, carry_out) = RRX_C(value, carry_in); return (result, carry_out);
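
Below is a compact Python model of the same dispatcher, with the LSL_C/LSR_C/ASR_C/ROR_C helpers inlined; these are our renderings of the shared helpers, not the specification's text. Note the amount == 0 case, which passes the carry through untouched.

def shift_c(value, kind, amount, carry_in, n=32):
    """Barrel-shifter model: amount == 0 returns the carry unchanged;
    otherwise the carry-out is the last bit shifted out."""
    mask = (1 << n) - 1
    if amount == 0:
        return value, carry_in
    if kind == "LSL":
        ext = value << amount
        return ext & mask, (ext >> n) & 1
    if kind == "LSR":
        return (value >> amount) & mask, (value >> (amount - 1)) & 1
    if kind == "ASR":
        signed = value - (1 << n) if value >> (n - 1) else value
        return (signed >> amount) & mask, (signed >> (amount - 1)) & 1
    if kind == "ROR":
        amount %= n
        res = ((value >> amount) | (value << (n - amount))) & mask if amount else value
        return res, (res >> (n - 1)) & 1
    assert kind == "RRX" and amount == 1
    return (carry_in << (n - 1)) | (value >> 1), value & 1

assert shift_c(0x80000000, "LSL", 1, 0) == (0, 1)   # top bit shifts into carry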

Library pseudocode for aarch32/functions/common/T32ExpandImm

// T32ExpandImm() // ============== bits(32) T32ExpandImm(bits(12) imm12) // PSTATE.C argument to following function call does not affect the imm32 result. (imm32, -) = T32ExpandImm_C(imm12, PSTATE.C); return imm32;

Library pseudocode for aarch32/functions/common/T32ExpandImm_C

// T32ExpandImm_C() // ================ (bits(32), bit) T32ExpandImm_C(bits(12) imm12, bit carry_in) if imm12<11:10> == '00' then case imm12<9:8> of when '00' imm32 = ZeroExtend(imm12<7:0>, 32); when '01' imm32 = '00000000' : imm12<7:0> : '00000000' : imm12<7:0>; when '10' imm32 = imm12<7:0> : '00000000' : imm12<7:0> : '00000000'; when '11' imm32 = imm12<7:0> : imm12<7:0> : imm12<7:0> : imm12<7:0>; carry_out = carry_in; else unrotated_value = ZeroExtend('1':imm12<6:0>, 32); (imm32, carry_out) = ROR_C(unrotated_value, UInt(imm12<11:7>)); return (imm32, carry_out);
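
When imm12<11:10> is '00', the low byte is replicated in one of four patterns and the carry is unchanged; otherwise '1':imm12<6:0> is rotated by imm12<11:7>, which is always at least 8 in that branch, so there is no zero-rotation case. A Python model (names are ours):

def t32_expand_imm_c(imm12, carry_in):
    """T32 modified-immediate expansion: four byte-replication patterns
    when imm12<11:10> == '00', otherwise a rotated '1':imm12<6:0>."""
    b = imm12 & 0xFF
    if (imm12 >> 10) == 0:
        pattern = [b, b | (b << 16), (b << 8) | (b << 24), b * 0x01010101]
        return pattern[(imm12 >> 8) & 3], carry_in
    unrotated = 0x80 | (imm12 & 0x7F)
    rot = imm12 >> 7                      # 5-bit rotation, 8..31 here
    imm32 = ((unrotated >> rot) | (unrotated << (32 - rot))) & 0xFFFFFFFF
    return imm32, (imm32 >> 31) & 1

# '01' pattern: 0x00XY00XY replication of the low byte.
assert t32_expand_imm_c(0x1FF, 0) == (0x00FF00FF, 0)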

Library pseudocode for aarch32/functions/coproc/AArch32.CheckCP15InstrCoarseTraps

// AArch32.CheckCP15InstrCoarseTraps() // =================================== // Check for coarse-grained CP15 traps in HSTR and HCR. boolean AArch32.CheckCP15InstrCoarseTraps(integer CRn, integer nreg, integer CRm) // Check for coarse-grained Hyp traps if EL2Enabled() && PSTATE.EL IN {EL0,EL1} then if PSTATE.EL == EL0 && !ELUsingAArch32(EL2) then return AArch64.CheckCP15InstrCoarseTraps(CRn, nreg, CRm); // Check for MCR, MRC, MCRR and MRRC disabled by HSTR<CRn/CRm> major = if nreg == 1 then CRn else CRm; if !(major IN {4,14}) && HSTR<major> == '1' then return TRUE; // Check for MRC and MCR disabled by HCR.TIDCP if (HCR.TIDCP == '1' && nreg == 1 && ((CRn == 9 && CRm IN {0,1,2, 5,6,7,8 }) || (CRn == 10 && CRm IN {0,1, 4, 8 }) || (CRn == 11 && CRm IN {0,1,2,3,4,5,6,7,8,15}))) then return TRUE; return FALSE;
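
The HSTR check reduces to a single bit test once the "major" coprocessor register number is chosen. An illustrative Python rendering of just that step, with hypothetical names:

def hstr_coarse_trap(crn, nreg, crm, hstr):
    """MCR/MRC accesses are keyed by CRn, MCRR/MRRC by CRm; c4 and c14
    are exempt, and HSTR<major> == 1 requests the trap."""
    major = crn if nreg == 1 else crm
    return major not in (4, 14) and ((hstr >> major) & 1) == 1

assert hstr_coarse_trap(crn=1, nreg=1, crm=0, hstr=1 << 1)       # c1 group trapped
assert not hstr_coarse_trap(crn=14, nreg=1, crm=0, hstr=0xFFFF)  # c14 never trapped here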

Library pseudocode for aarch32/functions/coproc/AArch32.CheckSystemAccess

// AArch32.CheckSystemAccess() // =========================== // Check System register access instruction for enables and disables AArch32.CheckSystemAccess(integer cp_num, bits(32) instr) assert cp_num == UInt(instr<11:8>) && (cp_num IN {14,15}); if PSTATE.EL == EL0 && !ELUsingAArch32(EL1) then AArch64.CheckAArch32SystemAccess(instr); return; // Decode the AArch32 System register access instruction if instr<31:28> != '1111' && instr<27:24> == '1110' && instr<4> == '1' then // MRC/MCR cprt = TRUE; cpdt = FALSE; nreg = 1; opc1 = UInt(instr<23:21>); opc2 = UInt(instr<7:5>); CRn = UInt(instr<19:16>); CRm = UInt(instr<3:0>); elsif instr<31:28> != '1111' && instr<27:21> == '1100010' then // MRRC/MCRR cprt = TRUE; cpdt = FALSE; nreg = 2; opc1 = UInt(instr<7:4>); CRm = UInt(instr<3:0>); elsif instr<31:28> != '1111' && instr<27:25> == '110' && instr<22> == '0' then // LDC/STC cprt = FALSE; cpdt = TRUE; nreg = 0; opc1 = 0; CRn = UInt(instr<15:12>); else allocated = FALSE; // // Coarse-grain decode into CP14 or CP15 encoding space. Each of the CPxxxInstrDecode functions // returns TRUE if the instruction is allocated at the current Exception level, FALSE otherwise. if cp_num == 14 then // LDC and STC only supported for c5 in CP14 encoding space if cpdt && CRn != 5 then allocated = FALSE; else // Coarse-grained decode of CP14 based on opc1 field case opc1 of when 0 allocated = CP14DebugInstrDecode(instr); when 1 allocated = CP14TraceInstrDecode(instr); when 7 allocated = CP14JazelleInstrDecode(instr); // JIDR only otherwise allocated = FALSE; // All other values are unallocated elsif cp_num == 15 then // LDC and STC not supported in CP15 encoding space if !cprt then allocated = FALSE; else allocated = CP15InstrDecode(instr); // Coarse-grain traps to EL2 have a higher priority than exceptions generated because // the access instruction is UNDEFINED if AArch32.CheckCP15InstrCoarseTraps(CRn, nreg, CRm) then // For a coarse-grain trap, if it is IMPLEMENTATION DEFINED whether an access from // User mode is UNDEFINED when the trap is disabled, then it is // IMPLEMENTATION DEFINED whether the same access is UNDEFINED or generates a trap // when the trap is enabled. if PSTATE.EL == EL0 && EL2Enabled() && !allocated then if boolean IMPLEMENTATION_DEFINED "UNDEF unallocated CP15 access at EL0" then UNDEFINED; AArch32.AArch32SystemAccessTrap(EL2, instr); else allocated = FALSE; if !allocated then UNDEFINED; // If the instruction is not UNDEFINED, it might be disabled or trapped to a higher EL. AArch32.CheckSystemAccessTraps(instr); return;

Library pseudocode for aarch32/functions/coproc/AArch32.CheckSystemAccessEL1Traps

// AArch32.CheckSystemAccessEL1Traps() // =================================== // Check for configurable disables or traps to EL1 or EL2 of a System register // access instruction. AArch32.CheckSystemAccessEL1Traps(bits(32) instr) assert PSTATE.EL == EL0; if ((HaveEL(EL1) && IsSecure() && !ELUsingAArch32(EL1)) || IsInHost()) then AArch64.CheckAArch32SystemAccessEL1Traps(instr); return; trap = FALSE; // Decode the AArch32 System register access instruction (op, cp_num, opc1, CRn, CRm, opc2, write) = AArch32.DecodeSysRegAccess(instr); if cp_num == 14 then if ((op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 5 && opc2 == 0) || // DBGDTRRXint/DBGDTRTXint (op == SystemAccessType_DT && CRn == 5 && opc2 == 0)) then // DBGDTRRXint/DBGDTRTXint (STC/LDC) trap = !Halted() && DBGDSCRext.UDCCdis == '1'; elsif opc1 == 0 then trap = DBGDSCRext.UDCCdis == '1'; elsif opc1 == 1 then trap = CPACR.TRCDIS == '1'; if HaveEL(EL3) && ELUsingAArch32(EL3) && NSACR.NSTRCDIS == '1' then trap = TRUE; elsif cp_num == 15 then if ((op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 12 && opc2 == 0) || // PMCR (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 12 && opc2 == 1) || // PMCNTENSET (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 12 && opc2 == 2) || // PMCNTENCLR (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 12 && opc2 == 3) || // PMOVSR (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 12 && opc2 == 6) || // PMCEID0 (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 12 && opc2 == 7) || // PMCEID1 (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 13 && opc2 == 1) || // PMXEVTYPER (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 14 && opc2 == 3) || // PMOVSSET (op == SystemAccessType_RT && opc1 == 0 && CRn == 14 && CRm >= 12)) then // PMEVTYPER<n> trap = PMUSERENR.EN == '0'; elsif op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 14 && opc2 == 4 then // PMSWINC trap = PMUSERENR.EN == '0' && PMUSERENR.SW == '0'; elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 13 && opc2 == 0) || // PMCCNTR (op == SystemAccessType_RRT && opc1 == 0 && CRm == 9)) then // PMCCNTR (MRRC/MCRR) trap = PMUSERENR.EN == '0' && (write || PMUSERENR.CR == '0'); elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 13 && opc2 == 2) || // PMXEVCNTR (op == SystemAccessType_RT && opc1 == 0 && CRn == 14 && CRm >= 8 && CRm <= 11)) then // PMEVCNTR<n> trap = PMUSERENR.EN == '0' && (write || PMUSERENR.ER == '0'); elsif op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 12 && opc2 == 5 then // PMSELR trap = PMUSERENR.EN == '0' && PMUSERENR.ER == '0'; elsif op == SystemAccessType_RT && opc1 == 0 && CRn == 14 && CRm == 2 && opc2 IN {0,1,2} then // CNTP_TVAL CNTP_CTL CNTP_CVAL trap = CNTKCTL.PL0PTEN == '0'; elsif op == SystemAccessType_RT && opc1 == 0 && CRn == 14 && CRm == 0 && opc2 == 0 then // CNTFRQ trap = CNTKCTL.PL0PCTEN == '0' && CNTKCTL.PL0VCTEN == '0'; elsif op == SystemAccessType_RRT && opc1 == 1 && CRm == 14 then // CNTVCT trap = CNTKCTL.PL0VCTEN == '0'; if trap then AArch32.AArch32SystemAccessTrap(EL1, instr);

Library pseudocode for aarch32/functions/coproc/AArch32.CheckSystemAccessEL2Traps

// AArch32.CheckSystemAccessEL2Traps() // =================================== // Check for configurable traps to EL2 of a System register access instruction. AArch32.CheckSystemAccessEL2Traps(bits(32) instr) assert EL2Enabled() && PSTATE.EL IN {EL0, EL1, EL2}; if EL2Enabled() && !ELUsingAArch32(EL2) then AArch64.CheckAArch32SystemAccessEL2Traps(instr); return; trap = FALSE; // Decode the AArch32 System register access instruction (op, cp_num, opc1, CRn, CRm, opc2, write) = AArch32.DecodeSysRegAccess(instr); if cp_num == 14 && PSTATE.EL IN {EL0, EL1} then if ((op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 0 && opc2 == 0) || // DBGDRAR (op == SystemAccessType_RRT && opc1 == 0 && CRm == 1) || // DBGDRAR (MRRC) (op == SystemAccessType_RT && opc1 == 0 && CRn == 2 && CRm == 0 && opc2 == 0) || // DBGDSAR (op == SystemAccessType_RRT && opc1 == 0 && CRm == 2)) then // DBGDSAR (MRRC) trap = HDCR.TDRA == '1' || HDCR.TDE == '1' || HCR.TGE == '1'; elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 0 && opc2 == 4) || // DBGOSLAR (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 1 && opc2 == 4) || // DBGOSLSR (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 3 && opc2 == 4) || // DBGOSDLR (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 4 && opc2 == 4)) then // DBGPRCR trap = HDCR.TDOSA == '1' || HDCR.TDE == '1' || HCR.TGE == '1'; elsif opc1 == 0 && (!Halted() || !(op == SystemAccessType_RT && CRn == 0 && CRm == 5 && opc2 == 0)) then trap = HDCR.TDA == '1' || HDCR.TDE == '1' || HCR.TGE == '1'; elsif opc1 == 1 then trap = HCPTR.TTA == '1'; if HaveEL(EL3) && ELUsingAArch32(EL3) && NSACR.NSTRCDIS == '1' then trap = TRUE; elsif op == SystemAccessType_RT && opc1 == 7 && CRn == 0 && CRm == 0 && opc2 == 0 then // JIDR trap = HCR.TID0 == '1'; elsif cp_num == 14 && PSTATE.EL == EL2 then if opc1 == 1 then trap = HCPTR.TTA == '1'; elsif cp_num == 15 && PSTATE.EL IN {EL0, EL1} then if ((op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 0 && opc2 == 0) || // SCTLR (op == SystemAccessType_RT && opc1 == 0 && CRn == 2 && CRm == 0 && opc2 == 0) || // TTBR0 (op == SystemAccessType_RRT && opc1 == 0 && CRm == 2) || // TTBR0 (MRRC/MCRR) (op == SystemAccessType_RT && opc1 == 0 && CRn == 2 && CRm == 0 && opc2 == 1) || // TTBR1 (op == SystemAccessType_RRT && opc1 == 1 && CRm == 2) || // TTBR1 (MRRC/MCRR) (op == SystemAccessType_RT && opc1 == 0 && CRn == 2 && CRm == 0 && opc2 == 2) || // TTBCR (op == SystemAccessType_RT && opc1 == 0 && CRn == 2 && CRm == 0 && opc2 == 3) || // TTBCR2 (op == SystemAccessType_RT && opc1 == 0 && CRn == 3 && CRm == 0 && opc2 == 0) || // DACR (op == SystemAccessType_RT && opc1 == 0 && CRn == 5 && CRm == 0 && opc2 == 0) || // DFSR (op == SystemAccessType_RT && opc1 == 0 && CRn == 5 && CRm == 0 && opc2 == 1) || // IFSR (op == SystemAccessType_RT && opc1 == 0 && CRn == 6 && CRm == 0 && opc2 == 0) || // DFAR (op == SystemAccessType_RT && opc1 == 0 && CRn == 6 && CRm == 0 && opc2 == 2) || // IFAR (op == SystemAccessType_RT && opc1 == 0 && CRn == 5 && CRm == 1 && opc2 == 0) || // ADFSR (op == SystemAccessType_RT && opc1 == 0 && CRn == 5 && CRm == 1 && opc2 == 1) || // AIFSR (op == SystemAccessType_RT && opc1 == 0 && CRn == 10 && CRm == 2 && opc2 == 0) || // PRRR/MAIR0 (op == SystemAccessType_RT && opc1 == 0 && CRn == 10 && CRm == 2 && opc2 == 1) || // NMRR/MAIR1 (op == SystemAccessType_RT && opc1 == 0 && CRn == 10 && CRm == 3 && opc2 == 0) || // AMAIR0 (op == SystemAccessType_RT && opc1 == 0 && CRn == 10 && CRm == 3 && opc2 == 1) || // AMAIR1 (op == SystemAccessType_RT && opc1 == 0 && CRn == 13 && CRm == 0 && opc2 == 1)) then // CONTEXTIDR trap = if write then HCR.TVM == '1' else HCR.TRVM == '1'; elsif op == SystemAccessType_RT && opc1 == 0 && CRn == 8 then // TLBI trap = write && HCR.TTLB == '1'; elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 6 && opc2 == 2) || // DCISW (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 10 && opc2 == 2) || // DCCSW (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 14 && opc2 == 2)) then // DCCISW trap = write && HCR.TSW == '1'; elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 6 && opc2 == 1) || // DCIMVAC (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 10 && opc2 == 1) || // DCCMVAC (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 14 && opc2 == 1)) then // DCCIMVAC trap = write && HCR.TPC == '1'; elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 5 && opc2 == 1) || // ICIMVAU (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 5 && opc2 == 0) || // ICIALLU (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 1 && opc2 == 0) || // ICIALLUIS (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 11 && opc2 == 1)) then // DCCMVAU trap = write && HCR.TPU == '1'; elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 0 && opc2 == 1) || // ACTLR (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 0 && opc2 == 3)) then // ACTLR2 trap = HCR.TAC == '1'; elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 0 && opc2 == 2) || // TCMTR (op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 0 && opc2 == 3) || // TLBTR (op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 0 && opc2 == 6) || // REVIDR (op == SystemAccessType_RT && opc1 == 1 && CRn == 0 && CRm == 0 && opc2 == 7)) then // AIDR trap = HCR.TID1 == '1'; elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 0 && opc2 == 1) || // CTR (op == SystemAccessType_RT && opc1 == 1 && CRn == 0 && CRm == 0 && opc2 == 0) || // CCSIDR (op == SystemAccessType_RT && opc1 == 1 && CRn == 0 && CRm == 0 && opc2 == 2) || // CCSIDR2 (op == SystemAccessType_RT && opc1 == 1 && CRn == 0 && CRm == 0 && opc2 == 1) || // CLIDR (op == SystemAccessType_RT && opc1 == 2 && CRn == 0 && CRm == 0 && opc2 == 0)) then // CSSELR trap = HCR.TID2 == '1'; elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 1) || // ID_* (op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 2 && opc2 <= 7) || // ID_* (op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm >= 3 && opc2 <= 1) || // Reserved (op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 3 && opc2 == 2) || // Reserved (op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 5 && opc2 IN {4,5})) then // Reserved trap = HCR.TID3 == '1'; elsif op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 0 && opc2 == 2 then // CPACR trap = HCPTR.TCPAC == '1'; elsif op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 12 && opc2 == 0 then // PMCR trap = HDCR.TPMCR == '1' || HDCR.TPM == '1'; elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 14 && CRm >= 8) || // PMEVCNTR<n>/PMEVTYPER<n> (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm IN {12,13,14}) || // PM* (op == SystemAccessType_RRT && opc1 == 0 && CRm == 9)) then // PMCCNTR (MRRC/MCRR) trap = HDCR.TPM == '1'; elsif op == SystemAccessType_RT && opc1 == 0 && CRn == 14 && CRm == 2 && opc2 IN {0,1,2} then // CNTP_TVAL CNTP_CTL CNTP_CVAL trap = CNTHCTL.PL1PCEN == '0'; elsif op == SystemAccessType_RRT && opc1 == 0 && CRm == 14 then // CNTPCT trap = CNTHCTL.PL1PCTEN == '0'; elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 1 && opc2 == 0) || // SCR (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 1 && opc2 == 2) || // NSACR (op == SystemAccessType_RT && opc1 == 0 && CRn == 12 && CRm == 0 && opc2 == 1) || // MVBAR (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 3 && opc2 == 1) || // SDCR (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 8 && opc2 >= 4)) then // ATS12NSOxx trap = IsSecureEL2Enabled() && PSTATE.EL == EL1 && IsSecure() && ELUsingAArch32(EL1); if trap then AArch32.AArch32SystemAccessTrap(EL2, instr);

Library pseudocode for aarch32/functions/coproc/AArch32.CheckSystemAccessTraps

// AArch32.CheckSystemAccessTraps() // ================================ // Check for configurable disables or traps to a higher EL of an System register access. AArch32.CheckSystemAccessTraps(bits(32) instr) if PSTATE.EL == EL0 then AArch32.CheckSystemAccessEL1Traps(instr); if EL2Enabled() && PSTATE.EL IN {EL0, EL1, EL2} && !IsInHost() then AArch32.CheckSystemAccessEL2Traps(instr); if HaveEL(EL3) && !ELUsingAArch32(EL3) && PSTATE.EL IN {EL0,EL1,EL2} then AArch64.CheckAArch32SystemAccessEL3Traps(instr);

Library pseudocode for aarch32/functions/coproc/AArch32.DecodeSysRegAccess

// AArch32.DecodeSysRegAccess() // ============================ // Decode an AArch32 System register access instruction into its operands. (SystemAccessType, integer, integer, integer, integer, integer, boolean) AArch32.DecodeSysRegAccess(bits(32) instr) cp_num = UInt(instr<11:8>); // Decode the AArch32 System register access instruction if instr<31:28> != '1111' && instr<27:24> == '1110' && instr<4> == '1' then // MRC/MCR op = SystemAccessType_RT; opc1 = UInt(instr<23:21>); opc2 = UInt(instr<7:5>); CRn = UInt(instr<19:16>); CRm = UInt(instr<3:0>); write = instr<20> == '0'; elsif instr<31:28> != '1111' && instr<27:21> == '1100010' then // MRRC/MCRR op = SystemAccessType_RRT; opc1 = UInt(instr<7:4>); CRm = UInt(instr<3:0>); write = instr<20> == '0'; elsif instr<31:28> != '1111' && instr<27:25> == '110' then // LDC/STC op = SystemAccessType_DT; CRn = UInt(instr<15:12>); write = instr<20> == '0'; return (op, cp_num, opc1, CRn, CRm, opc2, write);

Library pseudocode for aarch32/functions/coproc/CP14DebugInstrDecode

// Decodes an accepted access to a debug System register in the CP14 encoding space. // Returns TRUE if the instruction is allocated at the current Exception level, FALSE otherwise. boolean CP14DebugInstrDecode(bits(32) instr);

Library pseudocode for aarch32/functions/coproc/CP14JazelleInstrDecode

// Decodes an accepted access to a Jazelle System register in the CP14 encoding space. // Returns TRUE if the instruction is allocated at the current Exception level, FALSE otherwise. boolean CP14JazelleInstrDecode(bits(32) instr);

Library pseudocode for aarch32/functions/coproc/CP14TraceInstrDecode

// Decodes an accepted access to a trace System register in the CP14 encoding space. // Returns TRUE if the instruction is allocated at the current Exception level, FALSE otherwise. boolean CP14TraceInstrDecode(bits(32) instr);

Library pseudocode for aarch32/functions/coproc/CP15InstrDecode

// Decodes an accepted access to a System register in the CP15 encoding space. // Returns TRUE if the instruction is allocated at the current Exception level, FALSE otherwise. boolean CP15InstrDecode(bits(32) instr);

Library pseudocode for aarch32/functions/exclusive/AArch32.ExclusiveMonitorsPass

// AArch32.ExclusiveMonitorsPass() // =============================== // Return TRUE if the Exclusives monitors for the current PE include all of the addresses // associated with the virtual address region of size bytes starting at address. // The immediately following memory write must be to the same addresses. boolean AArch32.ExclusiveMonitorsPass(bits(32) address, integer size) // It is IMPLEMENTATION DEFINED whether the detection of memory aborts happens // before or after the check on the local Exclusives monitor. As a result, a failure // of the local monitor can occur on some implementations even if the memory // access would give a memory abort. acctype = AccType_ATOMIC; iswrite = TRUE; aligned = (address == Align(address, size)); if !aligned then secondstage = FALSE; AArch32.Abort(address, AArch32.AlignmentFault(acctype, iswrite, secondstage)); passed = AArch32.IsExclusiveVA(address, ProcessorID(), size); if !passed then return FALSE; memaddrdesc = AArch32.TranslateAddress(address, acctype, iswrite, aligned, size); // Check for aborts or debug exceptions if IsFault(memaddrdesc) then AArch32.Abort(address, memaddrdesc.fault); passed = IsExclusiveLocal(memaddrdesc.paddress, ProcessorID(), size); if passed then ClearExclusiveLocal(ProcessorID()); if memaddrdesc.memattrs.shareable then passed = IsExclusiveGlobal(memaddrdesc.paddress, ProcessorID(), size); return passed;

Library pseudocode for aarch32/functions/exclusive/AArch32.IsExclusiveVA

// An optional IMPLEMENTATION DEFINED test for an exclusive access to a virtual // address region of size bytes starting at address. // // It is permitted (but not required) for this function to return FALSE and // cause a store exclusive to fail if the virtual address region is not // totally included within the region recorded by MarkExclusiveVA(). // // It is always safe to return TRUE which will check the physical address only. boolean AArch32.IsExclusiveVA(bits(32) address, integer processorid, integer size);

Library pseudocode for aarch32/functions/exclusive/AArch32.MarkExclusiveVA

// Optionally record an exclusive access to the virtual address region of size bytes // starting at address for processorid. AArch32.MarkExclusiveVA(bits(32) address, integer processorid, integer size);

Library pseudocode for aarch32/functions/exclusive/AArch32.SetExclusiveMonitors

// AArch32.SetExclusiveMonitors() // ============================== // Sets the Exclusives monitors for the current PE to record the addresses associated // with the virtual address region of size bytes starting at address. AArch32.SetExclusiveMonitors(bits(32) address, integer size) acctype = AccType_ATOMIC; iswrite = FALSE; aligned = (address == Align(address, size)); memaddrdesc = AArch32.TranslateAddress(address, acctype, iswrite, aligned, size); // Check for aborts or debug exceptions if IsFault(memaddrdesc) then return; if memaddrdesc.memattrs.shareable then MarkExclusiveGlobal(memaddrdesc.paddress, ProcessorID(), size); MarkExclusiveLocal(memaddrdesc.paddress, ProcessorID(), size); AArch32.MarkExclusiveVA(address, ProcessorID(), size);

Library pseudocode for aarch32/functions/float/CheckAdvSIMDEnabled

// CheckAdvSIMDEnabled() // ===================== CheckAdvSIMDEnabled() fpexc_check = TRUE; advsimd = TRUE; AArch32.CheckAdvSIMDOrFPEnabled(fpexc_check, advsimd); // Return from CheckAdvSIMDOrFPEnabled() occurs only if Advanced SIMD access is permitted // Make temporary copy of D registers // _Dclone[] is used as input data for instruction pseudocode for i = 0 to 31 _Dclone[i] = D[i]; return;

Library pseudocode for aarch32/functions/float/CheckAdvSIMDOrVFPEnabled

// CheckAdvSIMDOrVFPEnabled() // ========================== CheckAdvSIMDOrVFPEnabled(boolean include_fpexc_check, boolean advsimd) AArch32.CheckAdvSIMDOrFPEnabled(include_fpexc_check, advsimd); // Return from CheckAdvSIMDOrFPEnabled() occurs only if VFP access is permitted return;

Library pseudocode for aarch32/functions/float/CheckCryptoEnabled32

// CheckCryptoEnabled32() // ====================== CheckCryptoEnabled32() CheckAdvSIMDEnabled(); // Return from CheckAdvSIMDEnabled() occurs only if access is permitted return;

Library pseudocode for aarch32/functions/float/CheckVFPEnabled

// CheckVFPEnabled() // ================= CheckVFPEnabled(boolean include_fpexc_check) advsimd = FALSE; AArch32.CheckAdvSIMDOrFPEnabled(include_fpexc_check, advsimd); // Return from CheckAdvSIMDOrFPEnabled() occurs only if VFP access is permitted return;

Library pseudocode for aarch32/functions/float/FPHalvedSub

// FPHalvedSub() // ============= bits(N) FPHalvedSub(bits(N) op1, bits(N) op2, FPCRType fpcr) assert N IN {16,32,64}; rounding = FPRoundingMode(fpcr); (type1,sign1,value1) = FPUnpack(op1, fpcr); (type2,sign2,value2) = FPUnpack(op2, fpcr); (done,result) = FPProcessNaNs(type1, type2, op1, op2, fpcr); if !done then inf1 = (type1 == FPType_Infinity); inf2 = (type2 == FPType_Infinity); zero1 = (type1 == FPType_Zero); zero2 = (type2 == FPType_Zero); if inf1 && inf2 && sign1 == sign2 then result = FPDefaultNaN(); FPProcessException(FPExc_InvalidOp, fpcr); elsif (inf1 && sign1 == '0') || (inf2 && sign2 == '1') then result = FPInfinity('0'); elsif (inf1 && sign1 == '1') || (inf2 && sign2 == '0') then result = FPInfinity('1'); elsif zero1 && zero2 && sign1 != sign2 then result = FPZero(sign1); else result_value = (value1 - value2) / 2.0; if result_value == 0.0 then // Sign of exact zero result depends on rounding mode result_sign = if rounding == FPRounding_NEGINF then '1' else '0'; result = FPZero(result_sign); else result = FPRound(result_value, fpcr); return result;

Library pseudocode for aarch32/functions/float/FPRSqrtStep

// FPRSqrtStep() // ============= bits(N) FPRSqrtStep(bits(N) op1, bits(N) op2) assert N IN {16,32}; FPCRType fpcr = StandardFPSCRValue(); (type1,sign1,value1) = FPUnpack(op1, fpcr); (type2,sign2,value2) = FPUnpack(op2, fpcr); (done,result) = FPProcessNaNs(type1, type2, op1, op2, fpcr); if !done then inf1 = (type1 == FPType_Infinity); inf2 = (type2 == FPType_Infinity); zero1 = (type1 == FPType_Zero); zero2 = (type2 == FPType_Zero); bits(N) product; if (inf1 && zero2) || (zero1 && inf2) then product = FPZero('0'); else product = FPMul(op1, op2, fpcr); bits(N) three = FPThree('0'); result = FPHalvedSub(three, product, fpcr); return result;

Library pseudocode for aarch32/functions/float/FPRecipStep

// FPRecipStep() // ============= bits(N) FPRecipStep(bits(N) op1, bits(N) op2) assert N IN {16,32}; FPCRType fpcr = StandardFPSCRValue(); (type1,sign1,value1) = FPUnpack(op1, fpcr); (type2,sign2,value2) = FPUnpack(op2, fpcr); (done,result) = FPProcessNaNs(type1, type2, op1, op2, fpcr); if !done then inf1 = (type1 == FPType_Infinity); inf2 = (type2 == FPType_Infinity); zero1 = (type1 == FPType_Zero); zero2 = (type2 == FPType_Zero); bits(N) product; if (inf1 && zero2) || (zero1 && inf2) then product = FPZero('0'); else product = FPMul(op1, op2, fpcr); bits(N) two = FPTwo('0'); result = FPSub(two, product, fpcr); return result;

Library pseudocode for aarch32/functions/float/StandardFPSCRValue

// StandardFPSCRValue() // ==================== FPCRType StandardFPSCRValue() return '00000' : FPSCR.AHP : '110000' : FPSCR.FZ16 : '0000000000000000000';

Library pseudocode for aarch32/functions/memory/AArch32.CheckAlignment

// AArch32.CheckAlignment() // ======================== boolean AArch32.CheckAlignment(bits(32) address, integer alignment, AccType acctype, boolean iswrite) if PSTATE.EL == EL0 && !ELUsingAArch32(S1TranslationRegime()) then A = SCTLR[].A; //use AArch64 register, when higher Exception level is using AArch64 elsif PSTATE.EL == EL2 then A = HSCTLR.A; else A = SCTLR.A; aligned = (address == Align(address, alignment)); atomic = acctype IN { AccType_ATOMIC, AccType_ATOMICRW, AccType_ORDEREDATOMIC, AccType_ORDEREDATOMICRW }; ordered = acctype IN { AccType_ORDERED, AccType_ORDEREDRW, AccType_LIMITEDORDERED, AccType_ORDEREDATOMIC, AccType_ORDEREDATOMICRW }; vector = acctype == AccType_VEC; // AccType_VEC is used for SIMD element alignment checks only check = (atomic || ordered || vector || A == '1'); if check && !aligned then secondstage = FALSE; AArch32.Abort(address, AArch32.AlignmentFault(acctype, iswrite, secondstage)); return aligned;

Library pseudocode for aarch32/functions/memory/AArch32.MemSingle

// AArch32.MemSingle[] - non-assignment (read) form // ================================================ // Perform an atomic, little-endian read of 'size' bytes. bits(size*8) AArch32.MemSingle[bits(32) address, integer size, AccType acctype, boolean wasaligned] assert size IN {1, 2, 4, 8, 16}; assert address == Align(address, size); AddressDescriptor memaddrdesc; bits(size*8) value; iswrite = FALSE; // MMU or MPU memaddrdesc = AArch32.TranslateAddress(address, acctype, iswrite, wasaligned, size); // Check for aborts or debug exceptions if IsFault(memaddrdesc) then AArch32.Abort(address, memaddrdesc.fault); // Memory array access accdesc = CreateAccessDescriptor(acctype); if HaveMTEExt() then if AccessIsTagChecked(ZeroExtend(address, 64), acctype) then bits(4) ptag = TransformTag(ZeroExtend(address, 64)); if !CheckTag(memaddrdesc, ptag, iswrite) then TagCheckFail(ZeroExtend(address, 64), iswrite); value = _Mem[memaddrdesc, size, accdesc]; return value; // AArch32.MemSingle[] - assignment (write) form // ============================================= // Perform an atomic, little-endian write of 'size' bytes. AArch32.MemSingle[bits(32) address, integer size, AccType acctype, boolean wasaligned] = bits(size*8) value assert size IN {1, 2, 4, 8, 16}; assert address == Align(address, size); AddressDescriptor memaddrdesc; iswrite = TRUE; // MMU or MPU memaddrdesc = AArch32.TranslateAddress(address, acctype, iswrite, wasaligned, size); // Check for aborts or debug exceptions if IsFault(memaddrdesc) then AArch32.Abort(address, memaddrdesc.fault); // Effect on exclusives if memaddrdesc.memattrs.shareable then ClearExclusiveByAddress(memaddrdesc.paddress, ProcessorID(), size); // Memory array access accdesc = CreateAccessDescriptor(acctype); if HaveMTEExt() then if AccessIsTagChecked(ZeroExtend(address, 64), acctype) then bits(4) ptag = TransformTag(ZeroExtend(address, 64)); if !CheckTag(memaddrdesc, ptag, iswrite) then TagCheckFail(ZeroExtend(address, 64), iswrite); _Mem[memaddrdesc, size, accdesc] = value; return;

Library pseudocode for aarch32/functions/memory/AddressWithAllocationTag

// AddressWithAllocationTag() // ========================== // Generate a 64-bit value containing a Logical Address Tag from a 64-bit // virtual address and an Allocation Tag. // If the extension is disabled, treats the Allocation Tag as '0000'. bits(64) AddressWithAllocationTag(bits(64) address, bits(4) allocation_tag) bits(64) result = address; bits(4) tag = allocation_tag - ('000':address<55>); result<59:56> = tag; return result;

Library pseudocode for aarch32/functions/memory/AllocationTagFromAddress

// AllocationTagFromAddress() // ========================== // Generate a Tag from a 64-bit value containing a Logical Address Tag. // If access to Allocation Tags is disabled, this function returns '0000'. bits(4) AllocationTagFromAddress(bits(64) tagged_address) bits(4) logical_tag = tagged_address<59:56>; bits(4) tag = logical_tag + ('000':tagged_address<55>); return tag;
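
The two conversions above are inverses of each other: inserting a tag subtracts bit 55 of the address, and extracting it adds bit 55 back, both modulo 16. A round-trip check in Python (function names are ours):

def address_with_allocation_tag(address, allocation_tag):
    """Insert a Logical Address Tag at bits 59:56, compensating by
    bit 55 so the inverse below recovers the Allocation Tag."""
    logical = (allocation_tag - ((address >> 55) & 1)) & 0xF
    return (address & ~(0xF << 56)) | (logical << 56)

def allocation_tag_from_address(tagged_address):
    """Recover the Allocation Tag by adding bit 55 back in, modulo 16."""
    return (((tagged_address >> 56) & 0xF) + ((tagged_address >> 55) & 1)) & 0xF

for tag in range(16):
    for addr in (0x0000_7000_0000_0000, 0xFFFF_8000_0000_0000):  # bit 55 clear / set
        tagged = address_with_allocation_tag(addr, tag)
        assert allocation_tag_from_address(tagged) == tag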

Library pseudocode for aarch32/functions/memory/CheckTag

// CheckTag() // ========== // Performs a Tag Check operation for a memory access and returns // whether the check passed boolean CheckTag(AddressDescriptor memaddrdesc, bits(4) ptag, boolean write) if memaddrdesc.memattrs.tagged then bits(64) paddress = ZeroExtend(memaddrdesc.paddress.address); return ptag == MemTag[paddress]; else return TRUE;

Library pseudocode for aarch32/functions/memory/Hint_PreloadData

Hint_PreloadData(bits(32) address);

Library pseudocode for aarch32/functions/memory/Hint_PreloadDataForWrite

Hint_PreloadDataForWrite(bits(32) address);

Library pseudocode for aarch32/functions/memory/Hint_PreloadInstr

Hint_PreloadInstr(bits(32) address);

Library pseudocode for aarch32/functions/memory/MemA

// MemA[] - non-assignment form // ============================ bits(8*size) MemA[bits(32) address, integer size] acctype = AccType_ATOMIC; return Mem_with_type[address, size, acctype]; // MemA[] - assignment form // ======================== MemA[bits(32) address, integer size] = bits(8*size) value acctype = AccType_ATOMIC; Mem_with_type[address, size, acctype] = value; return;

Library pseudocode for aarch32/functions/memory/MemO

// MemO[] - non-assignment form // ============================ bits(8*size) MemO[bits(32) address, integer size] acctype = AccType_ORDERED; return Mem_with_type[address, size, acctype]; // MemO[] - assignment form // ======================== MemO[bits(32) address, integer size] = bits(8*size) value acctype = AccType_ORDERED; Mem_with_type[address, size, acctype] = value; return;

Library pseudocode for aarch32/functions/memory/MemTag

// MemTag[] - non-assignment (read) form // ===================================== // Load an Allocation Tag from memory. bits(4) MemTag[bits(64) address] AddressDescriptor memaddrdesc; bits(4) value; iswrite = FALSE; memaddrdesc = AArch64.TranslateAddress(address, AccType_NORMAL, iswrite, TRUE, TAG_GRANULE); // Check for aborts or debug exceptions if IsFault(memaddrdesc) then AArch64.Abort(address, memaddrdesc.fault); // Return the granule tag if tagging is enabled... if AllocationTagAccessIsEnabled() then return _MemTag[memaddrdesc]; else // ...otherwise read tag as zero. return '0000'; // MemTag[] - assignment (write) form // ================================== // Store an Allocation Tag to memory. MemTag[bits(64) address] = bits(4) value AddressDescriptor memaddrdesc; iswrite = TRUE; // Stores of allocation tags must be aligned if address != Align(address, TAG_GRANULE) then boolean secondstage = FALSE; AArch64.Abort(address, AArch64.AlignmentFault(AccType_NORMAL, iswrite, secondstage)); wasaligned = TRUE; memaddrdesc = AArch64.TranslateAddress(address, AccType_NORMAL, iswrite, wasaligned, TAG_GRANULE); // Check for aborts or debug exceptions if IsFault(memaddrdesc) then AArch64.Abort(address, memaddrdesc.fault); // Memory array access if AllocationTagAccessIsEnabled() then _MemTag[memaddrdesc] = value;

Library pseudocode for aarch32/functions/memory/MemU

// MemU[] - non-assignment form // ============================ bits(8*size) MemU[bits(32) address, integer size] acctype = AccType_NORMAL; return Mem_with_type[address, size, acctype]; // MemU[] - assignment form // ======================== MemU[bits(32) address, integer size] = bits(8*size) value acctype = AccType_NORMAL; Mem_with_type[address, size, acctype] = value; return;

Library pseudocode for aarch32/functions/memory/MemU_unpriv

// MemU_unpriv[] - non-assignment form // =================================== bits(8*size) MemU_unpriv[bits(32) address, integer size] acctype = AccType_UNPRIV; return Mem_with_type[address, size, acctype]; // MemU_unpriv[] - assignment form // =============================== MemU_unpriv[bits(32) address, integer size] = bits(8*size) value acctype = AccType_UNPRIV; Mem_with_type[address, size, acctype] = value; return;

Library pseudocode for aarch32/functions/memory/Mem_with_type

// Mem_with_type[] - non-assignment (read) form // ============================================ // Perform a read of 'size' bytes. The access byte order is reversed for a big-endian access. // Instruction fetches would call AArch32.MemSingle directly. bits(size*8) Mem_with_type[bits(32) address, integer size, AccType acctype] assert size IN {1, 2, 4, 8, 16}; bits(size*8) value; boolean iswrite = FALSE; aligned = AArch32.CheckAlignment(address, size, acctype, iswrite); if !aligned then assert size > 1; value<7:0> = AArch32.MemSingle[address, 1, acctype, aligned]; // For subsequent bytes it is CONSTRAINED UNPREDICTABLE whether an unaligned Device memory // access will generate an Alignment Fault, as to get this far means the first byte did // not, so we must be changing to a new translation page. c = ConstrainUnpredictable(Unpredictable_DEVPAGE2); assert c IN {Constraint_FAULT, Constraint_NONE}; if c == Constraint_NONE then aligned = TRUE; for i = 1 to size-1 value<8*i+7:8*i> = AArch32.MemSingle[address+i, 1, acctype, aligned]; else value = AArch32.MemSingle[address, size, acctype, aligned]; if BigEndian() then value = BigEndianReverse(value); return value; // Mem_with_type[] - assignment (write) form // ========================================= // Perform a write of 'size' bytes. The byte order is reversed for a big-endian access. Mem_with_type[bits(32) address, integer size, AccType acctype] = bits(size*8) value boolean iswrite = TRUE; if BigEndian() then value = BigEndianReverse(value); aligned = AArch32.CheckAlignment(address, size, acctype, iswrite); if !aligned then assert size > 1; AArch32.MemSingle[address, 1, acctype, aligned] = value<7:0>; // For subsequent bytes it is CONSTRAINED UNPREDICTABLE whether an unaligned Device memory // access will generate an Alignment Fault, as to get this far means the first byte did // not, so we must be changing to a new translation page. c = ConstrainUnpredictable(Unpredictable_DEVPAGE2); assert c IN {Constraint_FAULT, Constraint_NONE}; if c == Constraint_NONE then aligned = TRUE; for i = 1 to size-1 AArch32.MemSingle[address+i, 1, acctype, aligned] = value<8*i+7:8*i>; else AArch32.MemSingle[address, size, acctype, aligned] = value; return;
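
In both forms, BigEndianReverse is applied to the whole access, so a big-endian transfer is modelled as a little-endian transfer plus a byte reversal. A one-line Python equivalent:

def big_endian_reverse(value, size):
    """Reverse the byte order of a size-byte value, as applied to the
    whole access when BigEndian() holds."""
    return int.from_bytes(value.to_bytes(size, "little"), "big")

assert big_endian_reverse(0x11223344, 4) == 0x44332211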

Library pseudocode for aarch32/functions/memory/TransformTag

// TransformTag() // ============== // Apply tag transformation rules. bits(4) TransformTag(bits(64) vaddr) bits(4) vtag = vaddr<59:56>; bits(4) tagdelta = ZeroExtend(vaddr<55>); bits(4) ptag = vtag + tagdelta; return ptag;

Library pseudocode for aarch32/functions/memory/AccessIsTagChecked

// AccessIsTagChecked() // ==================== // TRUE if a given access is tag-checked, FALSE otherwise. boolean AccessIsTagChecked(bits(64) vaddr, AccType acctype) if PSTATE.M<4> == '1' then return FALSE; if EffectiveTBI(vaddr, FALSE, PSTATE.EL) == '0' then return FALSE; if EffectiveTCMA(vaddr, PSTATE.EL) == '1' && (vaddr<59:55> == '00000' || vaddr<59:55> == '11111') then return FALSE; if !AllocationTagAccessIsEnabled() then return FALSE; if acctype IN {AccType_IFETCH, AccType_PTW} then return FALSE; if acctype == AccType_NV2REGISTER then return FALSE; if PSTATE.TCO == '1' then return FALSE; if IsNonTagCheckedInstruction() then return FALSE; return TRUE;

Library pseudocode for aarch32/functions/ras/AArch32.ESBOperation

// AArch32.ESBOperation() // ====================== // Perform the AArch32 ESB operation for ESB executed in AArch32 state AArch32.ESBOperation() // Check if routed to AArch64 state route_to_aarch64 = PSTATE.EL == EL0 && !ELUsingAArch32(EL1); if !route_to_aarch64 && EL2Enabled() && !ELUsingAArch32(EL2) then route_to_aarch64 = HCR_EL2.TGE == '1' || HCR_EL2.AMO == '1'; if !route_to_aarch64 && HaveEL(EL3) && !ELUsingAArch32(EL3) then route_to_aarch64 = SCR_EL3.EA == '1'; if route_to_aarch64 then AArch64.ESBOperation(); return; route_to_monitor = HaveEL(EL3) && ELUsingAArch32(EL3) && SCR.EA == '1'; route_to_hyp = EL2Enabled() && PSTATE.EL IN {EL0,EL1} && (HCR.TGE == '1' || HCR.AMO == '1'); if route_to_monitor then target = M32_Monitor; elsif route_to_hyp || PSTATE.M == M32_Hyp then target = M32_Hyp; else target = M32_Abort; if IsSecure() then mask_active = TRUE; elsif target == M32_Monitor then mask_active = SCR.AW == '1' && (!HaveEL(EL2) || (HCR.TGE == '0' && HCR.AMO == '0')); else mask_active = target == M32_Abort || PSTATE.M == M32_Hyp; mask_set = PSTATE.A == '1'; (-, el) = ELFromM32(target); intdis = Halted() || ExternalDebugInterruptsDisabled(el); masked = intdis || (mask_active && mask_set); // Check for a masked Physical SError pending if IsPhysicalSErrorPending() && masked then syndrome32 = AArch32.PhysicalSErrorSyndrome(); DISR = AArch32.ReportDeferredSError(syndrome32.AET, syndrome32.ExT); ClearPendingPhysicalSError(); return;

Library pseudocode for aarch32/functions/ras/AArch32.PhysicalSErrorSyndrome

// Return the SError syndrome AArch32.SErrorSyndrome AArch32.PhysicalSErrorSyndrome();

Library pseudocode for aarch32/functions/ras/AArch32.ReportDeferredSError

// AArch32.ReportDeferredSError() // ============================== // Return deferred SError syndrome bits(32) AArch32.ReportDeferredSError(bits(2) AET, bit ExT) bits(32) target; target<31> = '1'; // A syndrome = Zeros(16); if PSTATE.EL == EL2 then syndrome<11:10> = AET; // AET syndrome<9> = ExT; // EA syndrome<5:0> = '010001'; // DFSC else syndrome<15:14> = AET; // AET syndrome<12> = ExT; // ExT syndrome<9> = TTBCR.EAE; // LPAE if TTBCR.EAE == '1' then // Long-descriptor format syndrome<5:0> = '010001'; // STATUS else // Short-descriptor format syndrome<10,3:0> = '10110'; // FS if HaveAnyAArch64() then target<24:0> = ZeroExtend(syndrome);// Any RES0 fields must be set to zero else target<15:0> = syndrome; return target;
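
For the Hyp-mode branch, the packing amounts to setting the A bit and three syndrome fields. An illustrative Python rendering of that branch only (the non-Hyp case additionally depends on TTBCR.EAE, as above):

def report_deferred_serror_hyp(aet, ext):
    """DISR value for an SError deferred from Hyp mode: A<31>,
    AET<11:10>, EA<9>, and DFSC<5:0> = 0b010001 (deferred error)."""
    return (1 << 31) | ((aet & 3) << 10) | ((ext & 1) << 9) | 0b010001

assert report_deferred_serror_hyp(aet=0b10, ext=0) == 0x80000811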

Library pseudocode for aarch32/functions/ras/AArch32.SErrorSyndrome

type AArch32.SErrorSyndrome is ( bits(2) AET, bit ExT )

Library pseudocode for aarch32/functions/ras/AArch32.vESBOperation

// AArch32.vESBOperation() // ======================= // Perform the ESB operation for virtual SError interrupts executed in AArch32 state AArch32.vESBOperation() assert EL2Enabled() && PSTATE.EL IN {EL0,EL1}; // Check for EL2 using AArch64 state if !ELUsingAArch32(EL2) then AArch64.vESBOperation(); return; // If physical SError interrupts are routed to Hyp mode, and TGE is not set, then a // virtual SError interrupt might be pending vSEI_enabled = HCR.TGE == '0' && HCR.AMO == '1'; vSEI_pending = vSEI_enabled && HCR.VA == '1'; vintdis = Halted() || ExternalDebugInterruptsDisabled(EL1); vmasked = vintdis || PSTATE.A == '1'; // Check for a masked virtual SError pending if vSEI_pending && vmasked then VDISR = AArch32.ReportDeferredSError(VDFSR<15:14>, VDFSR<12>); HCR.VA = '0'; // Clear pending virtual SError return;

Library pseudocode for aarch32/functions/registers/AArch32.ResetGeneralRegisters

// AArch32.ResetGeneralRegisters() // =============================== AArch32.ResetGeneralRegisters() for i = 0 to 7 R[i] = bits(32) UNKNOWN; for i = 8 to 12 Rmode[i, M32_User] = bits(32) UNKNOWN; Rmode[i, M32_FIQ] = bits(32) UNKNOWN; if HaveEL(EL2) then Rmode[13, M32_Hyp] = bits(32) UNKNOWN; // No R14_hyp for i = 13 to 14 Rmode[i, M32_User] = bits(32) UNKNOWN; Rmode[i, M32_FIQ] = bits(32) UNKNOWN; Rmode[i, M32_IRQ] = bits(32) UNKNOWN; Rmode[i, M32_Svc] = bits(32) UNKNOWN; Rmode[i, M32_Abort] = bits(32) UNKNOWN; Rmode[i, M32_Undef] = bits(32) UNKNOWN; if HaveEL(EL3) then Rmode[i, M32_Monitor] = bits(32) UNKNOWN; return;

Library pseudocode for aarch32/functions/registers/AArch32.ResetSIMDFPRegisters

// AArch32.ResetSIMDFPRegisters() // ============================== AArch32.ResetSIMDFPRegisters() for i = 0 to 15 Q[i] = bits(128) UNKNOWN; return;

Library pseudocode for aarch32/functions/registers/AArch32.ResetSpecialRegisters

// AArch32.ResetSpecialRegisters() // =============================== AArch32.ResetSpecialRegisters() // AArch32 special registers SPSR_fiq = bits(32) UNKNOWN; SPSR_irq = bits(32) UNKNOWN; SPSR_svc = bits(32) UNKNOWN; SPSR_abt = bits(32) UNKNOWN; SPSR_und = bits(32) UNKNOWN; if HaveEL(EL2) then SPSR_hyp = bits(32) UNKNOWN; ELR_hyp = bits(32) UNKNOWN; if HaveEL(EL3) then SPSR_mon = bits(32) UNKNOWN; // External debug special registers DLR = bits(32) UNKNOWN; DSPSR = bits(32) UNKNOWN; return;

Library pseudocode for aarch32/functions/registers/AArch32.ResetSystemRegisters

AArch32.ResetSystemRegisters(boolean cold_reset);

Library pseudocode for aarch32/functions/registers/ALUExceptionReturn

// ALUExceptionReturn() // ==================== ALUExceptionReturn(bits(32) address) if PSTATE.EL == EL2 then UNDEFINED; elsif PSTATE.M IN {M32_User,M32_System} then UNPREDICTABLE; // UNDEFINED or NOP else AArch32.ExceptionReturn(address, SPSR[]);

Library pseudocode for aarch32/functions/registers/ALUWritePC

// ALUWritePC() // ============ ALUWritePC(bits(32) address) if CurrentInstrSet() == InstrSet_A32 then BXWritePC(address, BranchType_INDIR); else BranchWritePC(address, BranchType_INDIR);

Library pseudocode for aarch32/functions/registers/BXWritePC

// BXWritePC() // =========== BXWritePC(bits(32) address, BranchType branch_type) if address<0> == '1' then SelectInstrSet(InstrSet_T32); address<0> = '0'; else SelectInstrSet(InstrSet_A32); // For branches to an unaligned PC counter in A32 state, the processor takes the branch // and does one of: // * Forces the address to be aligned // * Leaves the PC unaligned, meaning the target generates a PC Alignment fault. if address<1> == '1' && ConstrainUnpredictableBool(Unpredictable_A32FORCEALIGNPC) then address<1> = '0'; BranchTo(address, branch_type);

Library pseudocode for aarch32/functions/registers/BranchWritePC

// BranchWritePC() // =============== BranchWritePC(bits(32) address, BranchType branch_type) if CurrentInstrSet() == InstrSet_A32 then address<1:0> = '00'; else address<0> = '0'; BranchTo(address, branch_type);
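Taken together, ALUWritePC, BXWritePC and BranchWritePC differ only in how they treat the low-order target bits: BXWritePC selects the instruction set from bit[0] of the target, while BranchWritePC stays in the current set and only forces alignment. As an illustration only, a minimal C sketch of that selection logic (the CpuState type and branch_to helper are hypothetical, and the CONSTRAINED UNPREDICTABLE unaligned-A32 case is modelled as the force-alignment choice):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    bool thumb;      /* PSTATE.T: true = T32, false = A32 */
    uint32_t pc;
} CpuState;

/* Hypothetical stand-in for BranchTo(): just records the target. */
static void branch_to(CpuState *s, uint32_t address) { s->pc = address; }

/* BXWritePC: bit[0] of the target selects the instruction set. */
static void bx_write_pc(CpuState *s, uint32_t address) {
    if (address & 1u) {
        s->thumb = true;
        address &= ~1u;            /* clear bit[0] before branching */
    } else {
        s->thumb = false;
        address &= ~2u;            /* models the force-alignment choice */
    }
    branch_to(s, address);
}

/* BranchWritePC: no interworking; alignment depends on the current set. */
static void branch_write_pc(CpuState *s, uint32_t address) {
    branch_to(s, address & (s->thumb ? ~1u : ~3u));
}

int main(void) {
    CpuState s = { .thumb = false, .pc = 0 };
    bx_write_pc(&s, 0x8001);       /* odd target: switch to T32 */
    printf("T32=%d pc=0x%08x\n", s.thumb, (unsigned)s.pc);
    branch_write_pc(&s, 0x9003);   /* T32: only bit[0] is cleared */
    printf("T32=%d pc=0x%08x\n", s.thumb, (unsigned)s.pc);
    return 0;
}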

Library pseudocode for aarch32/functions/registers/D

// D[] - non-assignment form // ========================= bits(64) D[integer n] assert n >= 0 && n <= 31; base = (n MOD 2) * 64; return _V[n DIV 2]<base+63:base>; // D[] - assignment form // ===================== D[integer n] = bits(64) value assert n >= 0 && n <= 31; base = (n MOD 2) * 64; _V[n DIV 2]<base+63:base> = value; return;
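D[n] is simply a 64-bit view into one half of the 128-bit vector register _V[n DIV 2]. A small C model of that aliasing, with a hypothetical V array standing in for the vector register file:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical register file: 16 Q registers, each as two 64-bit halves. */
static uint64_t V[16][2];

/* D[n]: the low (n even) or high (n odd) half of Q[n/2]. */
static uint64_t d_read(int n)            { return V[n / 2][n % 2]; }
static void d_write(int n, uint64_t val) { V[n / 2][n % 2] = val;  }

int main(void) {
    d_write(3, 0x1122334455667788ull);   /* D3 = high half of Q1 */
    printf("Q1 = %016llx_%016llx\n",
           (unsigned long long)V[1][1], (unsigned long long)V[1][0]);
    printf("D3 = %016llx\n", (unsigned long long)d_read(3));
    return 0;
}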

Library pseudocode for aarch32/functions/registers/Din

// Din[] - non-assignment form // =========================== bits(64) Din[integer n] assert n >= 0 && n <= 31; return _Dclone[n];

Library pseudocode for aarch32/functions/registers/LR

// LR - assignment form // ==================== LR = bits(32) value R[14] = value; return; // LR - non-assignment form // ======================== bits(32) LR return R[14];

Library pseudocode for aarch32/functions/registers/LoadWritePC

// LoadWritePC() // ============= LoadWritePC(bits(32) address) BXWritePC(address, BranchType_INDIR);

Library pseudocode for aarch32/functions/registers/LookUpRIndex

// LookUpRIndex() // ============== integer LookUpRIndex(integer n, bits(5) mode) assert n >= 0 && n <= 14; case n of // Select index by mode: usr fiq irq svc abt und hyp when 8 result = RBankSelect(mode, 8, 24, 8, 8, 8, 8, 8); when 9 result = RBankSelect(mode, 9, 25, 9, 9, 9, 9, 9); when 10 result = RBankSelect(mode, 10, 26, 10, 10, 10, 10, 10); when 11 result = RBankSelect(mode, 11, 27, 11, 11, 11, 11, 11); when 12 result = RBankSelect(mode, 12, 28, 12, 12, 12, 12, 12); when 13 result = RBankSelect(mode, 13, 29, 17, 19, 21, 23, 15); when 14 result = RBankSelect(mode, 14, 30, 16, 18, 20, 22, 14); otherwise result = n; return result;
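The table in LookUpRIndex encodes which modes bank which registers: FIQ banks R8-R14, the other exception modes bank only R13-R14, and Hyp has a banked R13 but shares R14 with User. A C transcription of the same table (the flat index numbers follow the pseudocode; the enum is a hypothetical stand-in for the mode encodings):

#include <stdio.h>

/* Columns mirror the pseudocode: usr fiq irq svc abt und hyp. */
enum Mode { USR, FIQ, IRQ, SVC, ABT, UND, HYP, NMODES };

static const int bank_index[7][NMODES] = {
    /* n=8  */ { 8, 24,  8,  8,  8,  8,  8 },
    /* n=9  */ { 9, 25,  9,  9,  9,  9,  9 },
    /* n=10 */ {10, 26, 10, 10, 10, 10, 10 },
    /* n=11 */ {11, 27, 11, 11, 11, 11, 11 },
    /* n=12 */ {12, 28, 12, 12, 12, 12, 12 },
    /* n=13 */ {13, 29, 17, 19, 21, 23, 15 },  /* SP banked per mode */
    /* n=14 */ {14, 30, 16, 18, 20, 22, 14 },  /* LR; no R14_hyp    */
};

/* LookUpRIndex: R0-R7 (and R8-R12 outside FIQ) use the User copies. */
static int lookup_r_index(int n, enum Mode mode) {
    return (n < 8) ? n : bank_index[n - 8][mode];
}

int main(void) {
    printf("R13 in SVC -> _R[%d]\n", lookup_r_index(13, SVC)); /* 19 */
    printf("R8  in FIQ -> _R[%d]\n", lookup_r_index(8,  FIQ)); /* 24 */
    printf("R14 in HYP -> _R[%d]\n", lookup_r_index(14, HYP)); /* 14 */
    return 0;
}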

Library pseudocode for aarch32/functions/registers/Monitor_mode_registers

bits(32) SP_mon; bits(32) LR_mon;

Library pseudocode for aarch32/functions/registers/PC

// PC - non-assignment form // ======================== bits(32) PC return R[15]; // This includes the offset from AArch32 state

Library pseudocode for aarch32/functions/registers/PCStoreValue

// PCStoreValue() // ============== bits(32) PCStoreValue() // This function returns the PC value. On architecture versions before ARMv7, it // is permitted to instead return PC+4, provided it does so consistently. It is // used only to describe A32 instructions, so it returns the address of the current // instruction plus 8 (normally) or 12 (when the alternative is permitted). return PC;

Library pseudocode for aarch32/functions/registers/Q

// Q[] - non-assignment form // ========================= bits(128) Q[integer n] assert n >= 0 && n <= 15; return _V[n]; // Q[] - assignment form // ===================== Q[integer n] = bits(128) value assert n >= 0 && n <= 15; _V[n] = value; return;

Library pseudocode for aarch32/functions/registers/Qin

// Qin[] - non-assignment form // =========================== bits(128) Qin[integer n] assert n >= 0 && n <= 15; return Din[2*n+1]:Din[2*n];

Library pseudocode for aarch32/functions/registers/R

// R[] - assignment form // ===================== R[integer n] = bits(32) value Rmode[n, PSTATE.M] = value; return; // R[] - non-assignment form // ========================= bits(32) R[integer n] if n == 15 then offset = (if CurrentInstrSet() == InstrSet_A32 then 8 else 4); return _PC<31:0> + offset; else return Rmode[n, PSTATE.M];
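Reads of R[15] return the current instruction address plus a fixed offset, which is where the familiar PC+8 (A32) and PC+4 (T32) behaviour comes from. A one-function C sketch under that assumption (the two state variables are hypothetical):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical state: address of the current instruction and PSTATE.T. */
static uint32_t current_instr_addr = 0x8000;
static bool thumb = false;

/* R[15] read: instruction address plus 8 (A32) or 4 (T32). */
static uint32_t read_pc(void) {
    return current_instr_addr + (thumb ? 4u : 8u);
}

int main(void) {
    printf("A32: PC reads as 0x%08x\n", (unsigned)read_pc());  /* 0x8008 */
    thumb = true;
    printf("T32: PC reads as 0x%08x\n", (unsigned)read_pc());  /* 0x8004 */
    return 0;
}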

Library pseudocode for aarch32/functions/registers/RBankSelect

// RBankSelect() // ============= integer RBankSelect(bits(5) mode, integer usr, integer fiq, integer irq, integer svc, integer abt, integer und, integer hyp) case mode of when M32_User result = usr; // User mode when M32_FIQ result = fiq; // FIQ mode when M32_IRQ result = irq; // IRQ mode when M32_Svc result = svc; // Supervisor mode when M32_Abort result = abt; // Abort mode when M32_Hyp result = hyp; // Hyp mode when M32_Undef result = und; // Undefined mode when M32_System result = usr; // System mode uses User mode registers otherwise Unreachable(); // Monitor mode return result;

Library pseudocode for aarch32/functions/registers/Rmode

// Rmode[] - non-assignment form // ============================= bits(32) Rmode[integer n, bits(5) mode] assert n >= 0 && n <= 14; // Check for attempted use of Monitor mode in Non-secure state. if !IsSecure() then assert mode != M32_Monitor; assert !BadMode(mode); if mode == M32_Monitor then if n == 13 then return SP_mon; elsif n == 14 then return LR_mon; else return _R[n]<31:0>; else return _R[LookUpRIndex(n, mode)]<31:0>; // Rmode[] - assignment form // ========================= Rmode[integer n, bits(5) mode] = bits(32) value assert n >= 0 && n <= 14; // Check for attempted use of Monitor mode in Non-secure state. if !IsSecure() then assert mode != M32_Monitor; assert !BadMode(mode); if mode == M32_Monitor then if n == 13 then SP_mon = value; elsif n == 14 then LR_mon = value; else _R[n]<31:0> = value; else // It is CONSTRAINED UNPREDICTABLE whether the upper 32 bits of the X // register are unchanged or set to zero. This is also tested for on // exception entry, as this applies to all AArch32 registers. if !HighestELUsingAArch32() && ConstrainUnpredictableBool(Unpredictable_ZEROUPPER) then _R[LookUpRIndex(n, mode)] = ZeroExtend(value); else _R[LookUpRIndex(n, mode)]<31:0> = value; return;

Library pseudocode for aarch32/functions/registers/S

// S[] - non-assignment form // ========================= bits(32) S[integer n] assert n >= 0 && n <= 31; base = (n MOD 4) * 32; return _V[n DIV 4]<base+31:base>; // S[] - assignment form // ===================== S[integer n] = bits(32) value assert n >= 0 && n <= 31; base = (n MOD 4) * 32; _V[n DIV 4]<base+31:base> = value; return;

Library pseudocode for aarch32/functions/registers/SP

// SP - assignment form // ==================== SP = bits(32) value R[13] = value; return; // SP - non-assignment form // ======================== bits(32) SP return R[13];

Library pseudocode for aarch32/functions/registers/_Dclone

array bits(64) _Dclone[0..31];

Library pseudocode for aarch32/functions/system/AArch32.ExceptionReturn

// AArch32.ExceptionReturn() // ========================= AArch32.ExceptionReturn(bits(32) new_pc, bits(32) spsr) SynchronizeContext(); // Attempts to change to an illegal mode or state will invoke the Illegal Execution state // mechanism SetPSTATEFromPSR(spsr); ClearExclusiveLocal(ProcessorID()); SendEventLocal(); if PSTATE.IL == '1' then // If the exception return is illegal, PC[1:0] are UNKNOWN new_pc<1:0> = bits(2) UNKNOWN; else // LR[1:0] or LR[0] are treated as being 0, depending on the target instruction set state if PSTATE.T == '1' then new_pc<0> = '0'; // T32 else new_pc<1:0> = '00'; // A32 BranchTo(new_pc, BranchType_ERET);

Library pseudocode for aarch32/functions/system/AArch32.ExecutingATS1xPInstr

// AArch32.ExecutingATS1xPInstr() // ============================== // Return TRUE if current instruction is AT S1CPR/WP boolean AArch32.ExecutingATS1xPInstr() if !HavePrivATExt() then return FALSE; instr = ThisInstr(); if instr<24+:4> == '1110' && instr<8+:4> == '1110' then op1 = instr<21+:3>; CRn = instr<16+:4>; CRm = instr<0+:4>; op2 = instr<5+:3>; return (op1 == '000' && CRn == '0111' && CRm == '1001' && op2 IN {'000','001'}); else return FALSE;

Library pseudocode for aarch32/functions/system/AArch32.ExecutingCP10or11Instr

// AArch32.ExecutingCP10or11Instr() // ================================ boolean AArch32.ExecutingCP10or11Instr() instr = ThisInstr(); instr_set = CurrentInstrSet(); assert instr_set IN {InstrSet_A32, InstrSet_T32}; if instr_set == InstrSet_A32 then return ((instr<27:24> == '1110' || instr<27:25> == '110') && instr<11:8> == '101x'); else // InstrSet_T32 return (instr<31:28> == '111x' && (instr<27:24> == '1110' || instr<27:25> == '110') && instr<11:8> == '101x');

Library pseudocode for aarch32/functions/system/AArch32.ExecutingLSMInstr

// AArch32.ExecutingLSMInstr() // =========================== // Returns TRUE if processor is executing a Load/Store Multiple instruction boolean AArch32.ExecutingLSMInstr() instr = ThisInstr(); instr_set = CurrentInstrSet(); assert instr_set IN {InstrSet_A32, InstrSet_T32}; if instr_set == InstrSet_A32 then return (instr<28+:4> != '1111' && instr<25+:3> == '100'); else // InstrSet_T32 if ThisInstrLength() == 16 then return (instr<12+:4> == '1100'); else return (instr<25+:7> == '1110100' && instr<22> == '0');

Library pseudocode for aarch32/functions/system/AArch32.ITAdvance

// AArch32.ITAdvance() // =================== AArch32.ITAdvance() if PSTATE.IT<2:0> == '000' then PSTATE.IT = '00000000'; else PSTATE.IT<4:0> = LSL(PSTATE.IT<4:0>, 1); return;
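PSTATE.IT advances by shifting its low five bits left once per instruction, so the condition encoded in the upper bits stays fixed while the mask drains; once IT[2:0] reaches zero the whole field is cleared. A C sketch of that state machine (the starting IT value below is a hypothetical three-instruction block):

#include <stdint.h>
#include <stdio.h>

/* Advance PSTATE.IT after each instruction in an IT block:
 * clear it when IT[2:0] == 000, else shift IT[4:0] left by one. */
static uint8_t it_advance(uint8_t it) {
    if ((it & 0x07) == 0)
        return 0;
    return (uint8_t)((it & 0xE0) | ((it << 1) & 0x1F));
}

int main(void) {
    uint8_t it = 0x06;  /* hypothetical three-instruction block */
    while (it != 0) {
        /* InITBlock() is IT[3:0] != 0000 */
        printf("IT = 0x%02x, in IT block = %d\n",
               (unsigned)it, (it & 0x0F) != 0);
        it = it_advance(it);
    }
    printf("IT = 0x00 (block finished)\n");
    return 0;
}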

Library pseudocode for aarch32/functions/system/AArch32.SysRegRead

// Read from a 32-bit AArch32 System register and return the register's contents. bits(32) AArch32.SysRegRead(integer cp_num, bits(32) instr);

Library pseudocode for aarch32/functions/system/AArch32.SysRegRead64

// Read from a 64-bit AArch32 System register and return the register's contents. bits(64) AArch32.SysRegRead64(integer cp_num, bits(32) instr);

Library pseudocode for aarch32/functions/system/AArch32.SysRegReadCanWriteAPSR

// AArch32.SysRegReadCanWriteAPSR() // ================================ // Determines whether the AArch32 System register read instruction can write to APSR flags. boolean AArch32.SysRegReadCanWriteAPSR(integer cp_num, bits(32) instr) assert UsingAArch32(); assert (cp_num IN {14,15}); assert cp_num == UInt(instr<11:8>); opc1 = UInt(instr<23:21>); opc2 = UInt(instr<7:5>); CRn = UInt(instr<19:16>); CRm = UInt(instr<3:0>); if cp_num == 14 && opc1 == 0 && CRn == 0 && CRm == 1 && opc2 == 0 then // DBGDSCRint return TRUE; return FALSE;

Library pseudocode for aarch32/functions/system/AArch32.SysRegWrite

// Write to a 32-bit AArch32 System register. AArch32.SysRegWrite(integer cp_num, bits(32) instr, bits(32) val);

Library pseudocode for aarch32/functions/system/AArch32.SysRegWrite64

// Write to a 64-bit AArch32 System register. AArch32.SysRegWrite64(integer cp_num, bits(32) instr, bits(64) val);

Library pseudocode for aarch32/functions/system/AArch32.WriteMode

// AArch32.WriteMode() // =================== // Function for dealing with writes to PSTATE.M from AArch32 state only. // This ensures that PSTATE.EL and PSTATE.SP are always valid. AArch32.WriteMode(bits(5) mode) (valid,el) = ELFromM32(mode); assert valid; PSTATE.M = mode; PSTATE.EL = el; PSTATE.nRW = '1'; PSTATE.SP = (if mode IN {M32_User,M32_System} then '0' else '1'); return;

Library pseudocode for aarch32/functions/system/AArch32.WriteModeByInstr

// AArch32.WriteModeByInstr() // ========================== // Function for dealing with writes to PSTATE.M from an AArch32 instruction, and ensuring that // illegal state changes are correctly flagged in PSTATE.IL. AArch32.WriteModeByInstr(bits(5) mode) (valid,el) = ELFromM32(mode); // 'valid' is set to FALSE if 'mode' is invalid for this implementation or the current value // of SCR.NS/SCR_EL3.NS. Additionally, it is illegal for an instruction to write 'mode' to // PSTATE.EL if it would result in any of: // * A change to a mode that would cause entry to a higher Exception level. if UInt(el) > UInt(PSTATE.EL) then valid = FALSE; // * A change to or from Hyp mode. if (PSTATE.M == M32_Hyp || mode == M32_Hyp) && PSTATE.M != mode then valid = FALSE; // * When EL2 is implemented and the value of HCR.TGE is '1', a change to a Non-secure EL1 mode. if PSTATE.M == M32_Monitor && HaveEL(EL2) && el == EL1 && SCR.NS == '1' && HCR.TGE == '1' then valid = FALSE; if !valid then PSTATE.IL = '1'; else AArch32.WriteMode(mode);
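The legality tests above reduce to: the mode must decode to an implemented Exception level, the change must not raise the Exception level, and Hyp mode can only be entered or left by exception entry or return. A simplified C sketch of that check (el_from_m32 is a hypothetical stand-in for ELFromM32, assuming Non-secure state with EL0-EL2 implemented; the Monitor/HCR.TGE clause is omitted for brevity):

#include <stdbool.h>
#include <stdio.h>

/* AArch32 mode encodings (PSTATE.M). */
enum {
    M32_USER = 0x10, M32_FIQ = 0x11, M32_IRQ = 0x12, M32_SVC = 0x13,
    M32_MONITOR = 0x16, M32_ABORT = 0x17, M32_HYP = 0x1A,
    M32_UNDEF = 0x1B, M32_SYSTEM = 0x1F,
};

/* Simplified ELFromM32: assumes Non-secure state, EL0-EL2 implemented. */
static int el_from_m32(int mode, bool *valid) {
    *valid = true;
    switch (mode) {
    case M32_USER:                   return 0;
    case M32_FIQ: case M32_IRQ: case M32_SVC: case M32_ABORT:
    case M32_UNDEF: case M32_SYSTEM: return 1;
    case M32_HYP:                    return 2;
    default: *valid = false;         return 0;  /* e.g. Monitor w/o EL3 */
    }
}

/* Core of AArch32.WriteModeByInstr(): is this mode change legal? */
static bool mode_change_valid(int cur_mode, int cur_el, int new_mode) {
    bool valid;
    int el = el_from_m32(new_mode, &valid);
    if (!valid) return false;
    if (el > cur_el) return false;                 /* no raising the EL */
    if ((cur_mode == M32_HYP || new_mode == M32_HYP) &&
        cur_mode != new_mode) return false;        /* no to/from Hyp    */
    return true;
}

int main(void) {
    printf("SVC->FIQ: %d\n", mode_change_valid(M32_SVC, 1, M32_FIQ)); /* 1 */
    printf("SVC->HYP: %d\n", mode_change_valid(M32_SVC, 1, M32_HYP)); /* 0 */
    return 0;
}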

Library pseudocode for aarch32/functions/system/BadMode

// BadMode() // ========= boolean BadMode(bits(5) mode) // Return TRUE if 'mode' encodes a mode that is not valid for this implementation case mode of when M32_Monitor valid = HaveAArch32EL(EL3); when M32_Hyp valid = HaveAArch32EL(EL2); when M32_FIQ, M32_IRQ, M32_Svc, M32_Abort, M32_Undef, M32_System // If EL3 is implemented and using AArch32, then these modes are EL3 modes in Secure // state, and EL1 modes in Non-secure state. If EL3 is not implemented or is using // AArch64, then these modes are EL1 modes. // Therefore it is sufficient to test this implementation supports EL1 using AArch32. valid = HaveAArch32EL(EL1); when M32_User valid = HaveAArch32EL(EL0); otherwise valid = FALSE; // Passed an illegal mode value return !valid;

Library pseudocode for aarch32/functions/system/BankedRegisterAccessValid

// BankedRegisterAccessValid() // =========================== // Checks for MRS (Banked register) or MSR (Banked register) accesses to registers // other than the SPSRs that are invalid. This includes ELR_hyp accesses. BankedRegisterAccessValid(bits(5) SYSm, bits(5) mode) case SYSm of when '000xx', '00100' // R8_usr to R12_usr if mode != M32_FIQ then UNPREDICTABLE; when '00101' // SP_usr if mode == M32_System then UNPREDICTABLE; when '00110' // LR_usr if mode IN {M32_Hyp,M32_System} then UNPREDICTABLE; when '010xx', '0110x', '01110' // R8_fiq to R12_fiq, SP_fiq, LR_fiq if mode == M32_FIQ then UNPREDICTABLE; when '1000x' // LR_irq, SP_irq if mode == M32_IRQ then UNPREDICTABLE; when '1001x' // LR_svc, SP_svc if mode == M32_Svc then UNPREDICTABLE; when '1010x' // LR_abt, SP_abt if mode == M32_Abort then UNPREDICTABLE; when '1011x' // LR_und, SP_und if mode == M32_Undef then UNPREDICTABLE; when '1110x' // LR_mon, SP_mon if !HaveEL(EL3) || !IsSecure() || mode == M32_Monitor then UNPREDICTABLE; when '11110' // ELR_hyp, only from Monitor or Hyp mode if !HaveEL(EL2) || !(mode IN {M32_Monitor,M32_Hyp}) then UNPREDICTABLE; when '11111' // SP_hyp, only from Monitor mode if !HaveEL(EL2) || mode != M32_Monitor then UNPREDICTABLE; otherwise UNPREDICTABLE; return;

Library pseudocode for aarch32/functions/system/CPSRWriteByInstr

// CPSRWriteByInstr() // ================== // Update PSTATE.<N,Z,C,V,Q,GE,E,A,I,F,M> from a CPSR value written by an MSR instruction. CPSRWriteByInstr(bits(32) value, bits(4) bytemask) privileged = PSTATE.EL != EL0; // PSTATE.<A,I,F,M> are not writable at EL0 // Write PSTATE from 'value', ignoring bytes masked by 'bytemask' if bytemask<3> == '1' then PSTATE.<N,Z,C,V,Q> = value<31:27>; // Bits <26:24> are ignored if bytemask<2> == '1' then // Bit <23> is RES0 if privileged then PSTATE.PAN = value<22>; // Bits <21:20> are RES0 PSTATE.GE = value<19:16>; if bytemask<1> == '1' then // Bits <15:10> are RES0 PSTATE.E = value<9>; // PSTATE.E is writable at EL0 if privileged then PSTATE.A = value<8>; if bytemask<0> == '1' then if privileged then PSTATE.<I,F> = value<7:6>; // Bit <5> is RES0 // AArch32.WriteModeByInstr() sets PSTATE.IL to 1 if this is an illegal mode change. AArch32.WriteModeByInstr(value<4:0>); return;
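Each bytemask bit gates one byte lane of the incoming value, and within a lane some fields are further gated by privilege. A C sketch that builds the effective write mask under those rules (bit positions follow the AArch32 CPSR layout; RES0 handling and the PSTATE.M write path via AArch32.WriteModeByInstr() are omitted):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Build the set of CPSR bits an MSR write may change, given the 4-bit
 * byte mask and the current privilege. A simplification of
 * CPSRWriteByInstr(). */
static uint32_t writable_bits(unsigned bytemask, bool privileged) {
    uint32_t m = 0;
    if (bytemask & 8) m |= 0xF8000000u;                    /* N,Z,C,V,Q    */
    if (bytemask & 4) m |= 0x000F0000u |                   /* GE[3:0]      */
                           (privileged ? 0x00400000u : 0); /* PAN          */
    if (bytemask & 2) m |= 0x00000200u |                   /* E, EL0 too   */
                           (privileged ? 0x00000100u : 0); /* A            */
    if (bytemask & 1) m |= privileged ? 0x000000DFu : 0;   /* I,F,M[4:0]   */
    return m;
}

int main(void) {
    uint32_t cpsr = 0x600001D3u, value = 0x800001D0u;
    uint32_t m = writable_bits(0x9, true);  /* flags + control lanes */
    cpsr = (cpsr & ~m) | (value & m);
    printf("CPSR = 0x%08x\n", (unsigned)cpsr);
    return 0;
}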

Library pseudocode for aarch32/functions/system/ConditionPassed

// ConditionPassed() // ================= boolean ConditionPassed() return ConditionHolds(AArch32.CurrentCond());

Library pseudocode for aarch32/functions/system/CurrentCond

bits(4) AArch32.CurrentCond();

Library pseudocode for aarch32/functions/system/InITBlock

// InITBlock() // =========== boolean InITBlock() if CurrentInstrSet() == InstrSet_T32 then return PSTATE.IT<3:0> != '0000'; else return FALSE;

Library pseudocode for aarch32/functions/system/LastInITBlock

// LastInITBlock() // =============== boolean LastInITBlock() return (PSTATE.IT<3:0> == '1000');

Library pseudocode for aarch32/functions/system/SPSRWriteByInstr

// SPSRWriteByInstr() // ================== SPSRWriteByInstr(bits(32) value, bits(4) bytemask) new_spsr = SPSR[]; if bytemask<3> == '1' then new_spsr<31:24> = value<31:24>; // N,Z,C,V,Q flags, IT[1:0],J bits if bytemask<2> == '1' then new_spsr<23:16> = value<23:16>; // IL bit, GE[3:0] flags if bytemask<1> == '1' then new_spsr<15:8> = value<15:8>; // IT[7:2] bits, E bit, A interrupt mask if bytemask<0> == '1' then new_spsr<7:0> = value<7:0>; // I,F interrupt masks, T bit, Mode bits SPSR[] = new_spsr; // UNPREDICTABLE if User or System mode return;

Library pseudocode for aarch32/functions/system/SPSRaccessValid

// SPSRaccessValid() // ================= // Checks for MRS (Banked register) or MSR (Banked register) accesses to the SPSRs // that are UNPREDICTABLE SPSRaccessValid(bits(5) SYSm, bits(5) mode) case SYSm of when '01110' // SPSR_fiq if mode == M32_FIQ then UNPREDICTABLE; when '10000' // SPSR_irq if mode == M32_IRQ then UNPREDICTABLE; when '10010' // SPSR_svc if mode == M32_Svc then UNPREDICTABLE; when '10100' // SPSR_abt if mode == M32_Abort then UNPREDICTABLE; when '10110' // SPSR_und if mode == M32_Undef then UNPREDICTABLE; when '11100' // SPSR_mon if !HaveEL(EL3) || mode == M32_Monitor || !IsSecure() then UNPREDICTABLE; when '11110' // SPSR_hyp if !HaveEL(EL2) || mode != M32_Monitor then UNPREDICTABLE; otherwise UNPREDICTABLE; return;

Library pseudocode for aarch32/functions/system/SelectInstrSet

// SelectInstrSet() // ================ SelectInstrSet(InstrSet iset) assert CurrentInstrSet() IN {InstrSet_A32, InstrSet_T32}; assert iset IN {InstrSet_A32, InstrSet_T32}; PSTATE.T = if iset == InstrSet_A32 then '0' else '1'; return;

Library pseudocode for aarch32/functions/v6simd/Sat

// Sat() // ===== bits(N) Sat(integer i, integer N, boolean unsigned) result = if unsigned then UnsignedSat(i, N) else SignedSat(i, N); return result;

Library pseudocode for aarch32/functions/v6simd/SignedSat

// SignedSat() // =========== bits(N) SignedSat(integer i, integer N) (result, -) = SignedSatQ(i, N); return result;

Library pseudocode for aarch32/functions/v6simd/UnsignedSat

// UnsignedSat() // ============= bits(N) UnsignedSat(integer i, integer N) (result, -) = UnsignedSatQ(i, N); return result;
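Saturation clamps the mathematical result to the representable N-bit range rather than wrapping. A direct C rendering of Sat()/SignedSat()/UnsignedSat(), valid for N below 63:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Clamp i to the N-bit signed or unsigned range, mirroring
 * Sat()/SignedSat()/UnsignedSat(). */
static int64_t sat(int64_t i, unsigned n, bool is_unsigned) {
    int64_t lo = is_unsigned ? 0 : -((int64_t)1 << (n - 1));
    int64_t hi = is_unsigned ? ((int64_t)1 << n) - 1
                             : ((int64_t)1 << (n - 1)) - 1;
    return i < lo ? lo : (i > hi ? hi : i);
}

int main(void) {
    printf("%lld\n", (long long)sat(300, 8, false));  /* 127 */
    printf("%lld\n", (long long)sat(300, 8, true));   /* 255 */
    printf("%lld\n", (long long)sat(-5, 8, true));    /* 0   */
    return 0;
}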

Library pseudocode for aarch32/translation/attrs/AArch32.DefaultTEXDecode

// AArch32.DefaultTEXDecode() // ========================== MemoryAttributes AArch32.DefaultTEXDecode(bits(3) TEX, bit C, bit B, bit S, AccType acctype) MemoryAttributes memattrs; // Reserved values map to allocated values if (TEX == '001' && C:B == '01') || (TEX == '010' && C:B != '00') || TEX == '011' then bits(5) texcb; (-, texcb) = ConstrainUnpredictableBits(Unpredictable_RESTEXCB); TEX = texcb<4:2>; C = texcb<1>; B = texcb<0>; case TEX:C:B of when '00000' // Device-nGnRnE memattrs.type = MemType_Device; memattrs.device = DeviceType_nGnRnE; when '00001', '01000' // Device-nGnRE memattrs.type = MemType_Device; memattrs.device = DeviceType_nGnRE; when '00010', '00011', '00100' // Write-back or Write-through Read allocate, or Non-cacheable memattrs.type = MemType_Normal; memattrs.inner = ShortConvertAttrsHints(C:B, acctype, FALSE); memattrs.outer = ShortConvertAttrsHints(C:B, acctype, FALSE); memattrs.shareable = (S == '1'); when '00110' memattrs = MemoryAttributes IMPLEMENTATION_DEFINED; when '00111' // Write-back Read and Write allocate memattrs.type = MemType_Normal; memattrs.inner = ShortConvertAttrsHints('01', acctype, FALSE); memattrs.outer = ShortConvertAttrsHints('01', acctype, FALSE); memattrs.shareable = (S == '1'); when '1xxxx' // Cacheable, TEX<1:0> = Outer attrs, {C,B} = Inner attrs memattrs.type = MemType_Normal; memattrs.inner = ShortConvertAttrsHints(C:B, acctype, FALSE); memattrs.outer = ShortConvertAttrsHints(TEX<1:0>, acctype, FALSE); memattrs.shareable = (S == '1'); otherwise // Reserved, handled above Unreachable(); // transient bits are not supported in this format memattrs.inner.transient = FALSE; memattrs.outer.transient = FALSE; // distinction between inner and outer shareable is not supported in this format memattrs.outershareable = memattrs.shareable; memattrs.tagged = FALSE; return MemAttrDefaults(memattrs);

Library pseudocode for aarch32/translation/attrs/AArch32.InstructionDevice

// AArch32.InstructionDevice() // =========================== // Instruction fetches from memory marked as Device but not execute-never might generate a // Permission Fault but are otherwise treated as if from Normal Non-cacheable memory. AddressDescriptor AArch32.InstructionDevice(AddressDescriptor addrdesc, bits(32) vaddress, bits(40) ipaddress, integer level, bits(4) domain, AccType acctype, boolean iswrite, boolean secondstage, boolean s2fs1walk) c = ConstrainUnpredictable(Unpredictable_INSTRDEVICE); assert c IN {Constraint_NONE, Constraint_FAULT}; if c == Constraint_FAULT then addrdesc.fault = AArch32.PermissionFault(ipaddress, domain, level, acctype, iswrite, secondstage, s2fs1walk); else addrdesc.memattrs.type = MemType_Normal; addrdesc.memattrs.inner.attrs = MemAttr_NC; addrdesc.memattrs.inner.hints = MemHint_No; addrdesc.memattrs.outer = addrdesc.memattrs.inner; addrdesc.memattrs.tagged = FALSE; addrdesc.memattrs = MemAttrDefaults(addrdesc.memattrs); return addrdesc;

Library pseudocode for aarch32/translation/attrs/AArch32.RemappedTEXDecode

// AArch32.RemappedTEXDecode() // =========================== MemoryAttributes AArch32.RemappedTEXDecode(bits(3) TEX, bit C, bit B, bit S, AccType acctype) MemoryAttributes memattrs; region = UInt(TEX<0>:C:B); // TEX<2:1> are ignored in this mapping scheme if region == 6 then memattrs = MemoryAttributes IMPLEMENTATION_DEFINED; else base = 2 * region; attrfield = PRRR<base+1:base>; if attrfield == '11' then // Reserved, maps to allocated value (-, attrfield) = ConstrainUnpredictableBits(Unpredictable_RESPRRR); case attrfield of when '00' // Device-nGnRnE memattrs.type = MemType_Device; memattrs.device = DeviceType_nGnRnE; when '01' // Device-nGnRE memattrs.type = MemType_Device; memattrs.device = DeviceType_nGnRE; when '10' memattrs.type = MemType_Normal; memattrs.inner = ShortConvertAttrsHints(NMRR<base+1:base>, acctype, FALSE); memattrs.outer = ShortConvertAttrsHints(NMRR<base+17:base+16>, acctype, FALSE); s_bit = if S == '0' then PRRR.NS0 else PRRR.NS1; memattrs.shareable = (s_bit == '1'); memattrs.outershareable = (s_bit == '1' && PRRR<region+24> == '0'); when '11' Unreachable(); // transient bits are not supported in this format memattrs.inner.transient = FALSE; memattrs.outer.transient = FALSE; memattrs.tagged = FALSE; return MemAttrDefaults(memattrs);

Library pseudocode for aarch32/translation/attrs/AArch32.S1AttrDecode

// AArch32.S1AttrDecode() // ====================== // Converts the Stage 1 attribute fields, using the MAIR, to orthogonal // attributes and hints. MemoryAttributes AArch32.S1AttrDecode(bits(2) SH, bits(3) attr, AccType acctype) MemoryAttributes memattrs; if PSTATE.EL == EL2 then mair = HMAIR1:HMAIR0; else mair = MAIR1:MAIR0; index = 8 * UInt(attr); attrfield = mair<index+7:index>; memattrs.tagged = FALSE; if ((attrfield<7:4> != '0000' && attrfield<7:4> != '1111' && attrfield<3:0> == '0000') || (attrfield<7:4> == '0000' && attrfield<3:0> != 'xx00')) then // Reserved, maps to an allocated value (-, attrfield) = ConstrainUnpredictableBits(Unpredictable_RESMAIR); if !HaveMTEExt() && attrfield<7:4> == '1111' && attrfield<3:0> == '0000' then // Reserved, maps to an allocated value (-, attrfield) = ConstrainUnpredictableBits(Unpredictable_RESMAIR); if attrfield<7:4> == '0000' then // Device memattrs.type = MemType_Device; case attrfield<3:0> of when '0000' memattrs.device = DeviceType_nGnRnE; when '0100' memattrs.device = DeviceType_nGnRE; when '1000' memattrs.device = DeviceType_nGRE; when '1100' memattrs.device = DeviceType_GRE; otherwise Unreachable(); // Reserved, handled above elsif attrfield<3:0> != '0000' then // Normal memattrs.type = MemType_Normal; memattrs.outer = LongConvertAttrsHints(attrfield<7:4>, acctype); memattrs.inner = LongConvertAttrsHints(attrfield<3:0>, acctype); memattrs.shareable = SH<1> == '1'; memattrs.outershareable = SH == '10'; elsif HaveMTEExt() && attrfield == '11110000' then // Tagged, Normal memattrs.tagged = TRUE; memattrs.type = MemType_Normal; memattrs.outer.attrs = MemAttr_WB; memattrs.inner.attrs = MemAttr_WB; memattrs.outer.hints = MemHint_RWA; memattrs.inner.hints = MemHint_RWA; memattrs.shareable = SH<1> == '1'; memattrs.outershareable = SH == '10'; else Unreachable(); // Reserved, handled above return MemAttrDefaults(memattrs);
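The AttrIndx value simply selects one byte of the concatenated MAIR pair, and a zero upper nibble marks the field as Device. A C sketch of the field extraction and that first-level split (the example MAIR value is hypothetical):

#include <stdint.h>
#include <stdio.h>

/* Pull the 8-bit attribute field for AttrIndx 'attr' out of a 64-bit
 * MAIR value (HMAIR1:HMAIR0 or MAIR1:MAIR0 concatenated). */
static uint8_t mair_field(uint64_t mair, unsigned attr) {
    return (uint8_t)(mair >> (8 * attr));
}

int main(void) {
    /* Hypothetical MAIR: Attr0 = Device-nGnRnE (0x00),
     * Attr1 = Normal WB RWA (0xFF), Attr2 = Normal NC (0x44). */
    uint64_t mair = 0x44FF00ull;
    for (unsigned attr = 0; attr < 3; attr++) {
        uint8_t f = mair_field(mair, attr);
        printf("Attr%u = 0x%02x -> %s\n", attr, f,
               (f & 0xF0) == 0 ? "Device" : "Normal");
    }
    return 0;
}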

Library pseudocode for aarch32/translation/attrs/AArch32.TranslateAddressS1Off

// AArch32.TranslateAddressS1Off() // =============================== // Called for stage 1 translations when translation is disabled to supply a default translation. // Note that there are additional constraints on instruction prefetching that are not described in // this pseudocode. TLBRecord AArch32.TranslateAddressS1Off(bits(32) vaddress, AccType acctype, boolean iswrite) assert ELUsingAArch32(S1TranslationRegime()); TLBRecord result; default_cacheable = (HasS2Translation() && ((if ELUsingAArch32(EL2) then HCR.DC else HCR_EL2.DC) == '1')); if default_cacheable then // Use default cacheable settings result.addrdesc.memattrs.type = MemType_Normal; result.addrdesc.memattrs.inner.attrs = MemAttr_WB; // Write-back result.addrdesc.memattrs.inner.hints = MemHint_RWA; result.addrdesc.memattrs.shareable = FALSE; result.addrdesc.memattrs.outershareable = FALSE; result.addrdesc.memattrs.tagged = HCR_EL2.DCT == '1'; elsif acctype != AccType_IFETCH then // Treat data as Device result.addrdesc.memattrs.type = MemType_Device; result.addrdesc.memattrs.device = DeviceType_nGnRnE; result.addrdesc.memattrs.inner = MemAttrHints UNKNOWN; result.addrdesc.memattrs.tagged = FALSE; else // Instruction cacheability controlled by SCTLR/HSCTLR.I if PSTATE.EL == EL2 then cacheable = HSCTLR.I == '1'; else cacheable = SCTLR.I == '1'; result.addrdesc.memattrs.type = MemType_Normal; if cacheable then result.addrdesc.memattrs.inner.attrs = MemAttr_WT; result.addrdesc.memattrs.inner.hints = MemHint_RA; else result.addrdesc.memattrs.inner.attrs = MemAttr_NC; result.addrdesc.memattrs.inner.hints = MemHint_No; result.addrdesc.memattrs.shareable = TRUE; result.addrdesc.memattrs.outershareable = TRUE; result.addrdesc.memattrs.tagged = FALSE; result.addrdesc.memattrs.outer = result.addrdesc.memattrs.inner; result.addrdesc.memattrs = MemAttrDefaults(result.addrdesc.memattrs); result.perms.ap = bits(3) UNKNOWN; result.perms.xn = '0'; result.perms.pxn = '0'; result.nG = bit UNKNOWN; result.contiguous = boolean UNKNOWN; result.domain = bits(4) UNKNOWN; result.level = integer UNKNOWN; result.blocksize = integer UNKNOWN; result.addrdesc.paddress.address = ZeroExtend(vaddress); result.addrdesc.paddress.NS = if IsSecure() then '0' else '1'; result.addrdesc.fault = AArch32.NoFault(); return result;

Library pseudocode for aarch32/translation/checks/AArch32.AccessIsPrivileged

// AArch32.AccessIsPrivileged() // ============================ boolean AArch32.AccessIsPrivileged(AccType acctype) el = AArch32.AccessUsesEL(acctype); if el == EL0 then ispriv = FALSE; elsif el != EL1 then ispriv = TRUE; else ispriv = (acctype != AccType_UNPRIV); return ispriv;

Library pseudocode for aarch32/translation/checks/AArch32.AccessUsesEL

// AArch32.AccessUsesEL() // ====================== // Returns the Exception Level of the regime that will manage the translation for a given access type. bits(2) AArch32.AccessUsesEL(AccType acctype) if acctype == AccType_UNPRIV then return EL0; else return PSTATE.EL;

Library pseudocode for aarch32/translation/checks/AArch32.CheckDomain

// AArch32.CheckDomain() // ===================== (boolean, FaultRecord) AArch32.CheckDomain(bits(4) domain, bits(32) vaddress, integer level, AccType acctype, boolean iswrite) index = 2 * UInt(domain); attrfield = DACR<index+1:index>; if attrfield == '10' then // Reserved value maps to an allocated value (-, attrfield) = ConstrainUnpredictableBits(Unpredictable_RESDACR); if attrfield == '00' then fault = AArch32.DomainFault(domain, level, acctype, iswrite); else fault = AArch32.NoFault(); permissioncheck = (attrfield == '01'); return (permissioncheck, fault);
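The domain check is a 2-bit field extraction from DACR at bit position 2*domain: 00 faults, 01 (Client) requires the permission check, 11 (Manager) skips it, and 10 is reserved. A short C illustration (the example DACR value is hypothetical):

#include <stdint.h>
#include <stdio.h>

/* Extract the 2-bit domain access value for 'domain' from DACR. */
static unsigned dacr_field(uint32_t dacr, unsigned domain) {
    return (dacr >> (2 * domain)) & 3u;
}

int main(void) {
    uint32_t dacr = 0x0000000Du;  /* hypothetical: D0 = 01, D1 = 11 */
    for (unsigned d = 0; d < 2; d++) {
        unsigned v = dacr_field(dacr, d);
        printf("domain %u: %u (%s)\n", d, v,
               v == 0 ? "fault" : v == 1 ? "check perms" : "no check");
    }
    return 0;
}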

Library pseudocode for aarch32/translation/checks/AArch32.CheckPermission

// AArch32.CheckPermission() // ========================= // Function used for permission checking from AArch32 stage 1 translations FaultRecord AArch32.CheckPermission(Permissions perms, bits(32) vaddress, integer level, bits(4) domain, bit NS, AccType acctype, boolean iswrite) assert ELUsingAArch32(S1TranslationRegime()); if PSTATE.EL != EL2 then wxn = SCTLR.WXN == '1'; if TTBCR.EAE == '1' || SCTLR.AFE == '1' || perms.ap<0> == '1' then priv_r = TRUE; priv_w = perms.ap<2> == '0'; user_r = perms.ap<1> == '1'; user_w = perms.ap<2:1> == '01'; else priv_r = perms.ap<2:1> != '00'; priv_w = perms.ap<2:1> == '01'; user_r = perms.ap<1> == '1'; user_w = FALSE; uwxn = SCTLR.UWXN == '1'; ispriv = AArch32.AccessIsPrivileged(acctype); pan = if HavePANExt() then PSTATE.PAN else '0'; is_ldst = !(acctype IN {AccType_DC, AccType_DC_UNPRIV, AccType_AT, AccType_IFETCH}); is_ats1xp = (acctype == AccType_AT && AArch32.ExecutingATS1xPInstr()); if pan == '1' && user_r && ispriv && (is_ldst || is_ats1xp) then priv_r = FALSE; priv_w = FALSE; user_xn = !user_r || perms.xn == '1' || (user_w && wxn); priv_xn = (!priv_r || perms.xn == '1' || perms.pxn == '1' || (priv_w && wxn) || (user_w && uwxn)); if ispriv then (r, w, xn) = (priv_r, priv_w, priv_xn); else (r, w, xn) = (user_r, user_w, user_xn); else // Access from EL2 wxn = HSCTLR.WXN == '1'; r = TRUE; w = perms.ap<2> == '0'; xn = perms.xn == '1' || (w && wxn); // Restriction on Secure instruction fetch if HaveEL(EL3) && IsSecure() && NS == '1' then secure_instr_fetch = if ELUsingAArch32(EL3) then SCR.SIF else SCR_EL3.SIF; if secure_instr_fetch == '1' then xn = TRUE; if acctype == AccType_IFETCH then fail = xn; failedread = TRUE; elsif acctype IN { AccType_ATOMICRW, AccType_ORDEREDRW, AccType_ORDEREDATOMICRW } then fail = !r || !w; failedread = !r; elsif acctype == AccType_DC then // DC maintenance instructions operating by VA, cannot fault from stage 1 translation. fail = FALSE; elsif iswrite then fail = !w; failedread = FALSE; else fail = !r; failedread = TRUE; if fail then secondstage = FALSE; s2fs1walk = FALSE; ipaddress = bits(40) UNKNOWN; return AArch32.PermissionFault(ipaddress, domain, level, acctype, !failedread, secondstage, s2fs1walk); else return AArch32.NoFault();

Library pseudocode for aarch32/translation/checks/AArch32.CheckS2Permission

// AArch32.CheckS2Permission() // =========================== // Function used for permission checking from AArch32 stage 2 translations FaultRecord AArch32.CheckS2Permission(Permissions perms, bits(32) vaddress, bits(40) ipaddress, integer level, AccType acctype, boolean iswrite, boolean s2fs1walk) assert HaveEL(EL2) && !IsSecure() && ELUsingAArch32(EL2) && HasS2Translation(); r = perms.ap<1> == '1'; w = perms.ap<2> == '1'; if HaveExtendedExecuteNeverExt() then case perms.xn:perms.xxn of when '00' xn = !r; when '01' xn = !r || PSTATE.EL == EL1; when '10' xn = TRUE; when '11' xn = !r || PSTATE.EL == EL0; else xn = !r || perms.xn == '1'; // Stage 1 walk is checked as a read, regardless of the original type if acctype == AccType_IFETCH && !s2fs1walk then fail = xn; failedread = TRUE; elsif (acctype IN { AccType_ATOMICRW, AccType_ORDEREDRW, AccType_ORDEREDATOMICRW }) && !s2fs1walk then fail = !r || !w; failedread = !r; elsif acctype == AccType_DC && !s2fs1walk then // DC maintenance instructions operating by VA, do not generate Permission faults // from stage 2 translation, other than from stage 1 translation table walk. fail = FALSE; elsif iswrite && !s2fs1walk then fail = !w; failedread = FALSE; else fail = !r; failedread = !iswrite; if fail then domain = bits(4) UNKNOWN; secondstage = TRUE; return AArch32.PermissionFault(ipaddress, domain, level, acctype, !failedread, secondstage, s2fs1walk); else return AArch32.NoFault();

Library pseudocode for aarch32/translation/debug/AArch32.CheckBreakpoint

// AArch32.CheckBreakpoint() // ========================= // Called before executing the instruction of length "size" bytes at "vaddress" in an AArch32 // translation regime. // The breakpoint can in fact be evaluated well ahead of execution, for example, at instruction // fetch. This is the simple sequential execution of the program. FaultRecord AArch32.CheckBreakpoint(bits(32) vaddress, integer size) assert ELUsingAArch32(S1TranslationRegime()); assert size IN {2,4}; match = FALSE; mismatch = FALSE; for i = 0 to UInt(DBGDIDR.BRPs) (match_i, mismatch_i) = AArch32.BreakpointMatch(i, vaddress, size); match = match || match_i; mismatch = mismatch || mismatch_i; if match && HaltOnBreakpointOrWatchpoint() then reason = DebugHalt_Breakpoint; Halt(reason); elsif (match || mismatch) && DBGDSCRext.MDBGen == '1' && AArch32.GenerateDebugExceptions() then acctype = AccType_IFETCH; iswrite = FALSE; debugmoe = DebugException_Breakpoint; return AArch32.DebugFault(acctype, iswrite, debugmoe); else return AArch32.NoFault();

Library pseudocode for aarch32/translation/debug/AArch32.CheckDebug

// AArch32.CheckDebug() // ==================== // Called on each access to check for a debug exception or entry to Debug state. FaultRecord AArch32.CheckDebug(bits(32) vaddress, AccType acctype, boolean iswrite, integer size) FaultRecord fault = AArch32.NoFault(); d_side = (acctype != AccType_IFETCH); generate_exception = AArch32.GenerateDebugExceptions() && DBGDSCRext.MDBGen == '1'; halt = HaltOnBreakpointOrWatchpoint(); // Relative priority of Vector Catch and Breakpoint exceptions not defined in the architecture vector_catch_first = ConstrainUnpredictableBool(Unpredictable_BPVECTORCATCHPRI); if !d_side && vector_catch_first && generate_exception then fault = AArch32.CheckVectorCatch(vaddress, size); if fault.type == Fault_None && (generate_exception || halt) then if d_side then fault = AArch32.CheckWatchpoint(vaddress, acctype, iswrite, size); else fault = AArch32.CheckBreakpoint(vaddress, size); if fault.type == Fault_None && !d_side && !vector_catch_first && generate_exception then return AArch32.CheckVectorCatch(vaddress, size); return fault;

Library pseudocode for aarch32/translation/debug/AArch32.CheckVectorCatch

// AArch32.CheckVectorCatch() // ========================== // Called before executing the instruction of length "size" bytes at "vaddress" in an AArch32 // translation regime. // Vector Catch can in fact be evaluated well ahead of execution, for example, at instruction // fetch. This is the simple sequential execution of the program. FaultRecord AArch32.CheckVectorCatch(bits(32) vaddress, integer size) assert ELUsingAArch32(S1TranslationRegime()); match = AArch32.VCRMatch(vaddress); if size == 4 && !match && AArch32.VCRMatch(vaddress + 2) then match = ConstrainUnpredictableBool(Unpredictable_VCMATCHHALF); if match && DBGDSCRext.MDBGen == '1' && AArch32.GenerateDebugExceptions() then acctype = AccType_IFETCH; iswrite = FALSE; debugmoe = DebugException_VectorCatch; return AArch32.DebugFault(acctype, iswrite, debugmoe); else return AArch32.NoFault();

Library pseudocode for aarch32/translation/debug/AArch32.CheckWatchpoint

// AArch32.CheckWatchpoint() // ========================= // Called before accessing the memory location of "size" bytes at "address". FaultRecord AArch32.CheckWatchpoint(bits(32) vaddress, AccType acctype, boolean iswrite, integer size) assert ELUsingAArch32(S1TranslationRegime()); match = FALSE; ispriv = AArch32.AccessIsPrivileged(acctype); for i = 0 to UInt(DBGDIDR.WRPs) match = match || AArch32.WatchpointMatch(i, vaddress, size, ispriv, iswrite); if match && HaltOnBreakpointOrWatchpoint() then reason = DebugHalt_Watchpoint; Halt(reason); elsif match && DBGDSCRext.MDBGen == '1' && AArch32.GenerateDebugExceptions() then debugmoe = DebugException_Watchpoint; return AArch32.DebugFault(acctype, iswrite, debugmoe); else return AArch32.NoFault();

Library pseudocode for aarch32/translation/faults/AArch32.AccessFlagFault

// AArch32.AccessFlagFault() // ========================= FaultRecord AArch32.AccessFlagFault(bits(40) ipaddress, bits(4) domain, integer level, AccType acctype, boolean iswrite, boolean secondstage, boolean s2fs1walk) extflag = bit UNKNOWN; debugmoe = bits(4) UNKNOWN; errortype = bits(2) UNKNOWN; return AArch32.CreateFaultRecord(Fault_AccessFlag, ipaddress, domain, level, acctype, iswrite, extflag, debugmoe, errortype, secondstage, s2fs1walk);

Library pseudocode for aarch32/translation/faults/AArch32.AddressSizeFault

// AArch32.AddressSizeFault() // ========================== FaultRecord AArch32.AddressSizeFault(bits(40) ipaddress, bits(4) domain, integer level, AccType acctype, boolean iswrite, boolean secondstage, boolean s2fs1walk) extflag = bit UNKNOWN; debugmoe = bits(4) UNKNOWN; errortype = bits(2) UNKNOWN; return AArch32.CreateFaultRecord(Fault_AddressSize, ipaddress, domain, level, acctype, iswrite, extflag, debugmoe, errortype, secondstage, s2fs1walk);

Library pseudocode for aarch32/translation/faults/AArch32.AlignmentFault

// AArch32.AlignmentFault() // ======================== FaultRecord AArch32.AlignmentFault(AccType acctype, boolean iswrite, boolean secondstage) ipaddress = bits(40) UNKNOWN; domain = bits(4) UNKNOWN; level = integer UNKNOWN; extflag = bit UNKNOWN; debugmoe = bits(4) UNKNOWN; errortype = bits(2) UNKNOWN; s2fs1walk = boolean UNKNOWN; return AArch32.CreateFaultRecord(Fault_Alignment, ipaddress, domain, level, acctype, iswrite, extflag, debugmoe, errortype, secondstage, s2fs1walk);

Library pseudocode for aarch32/translation/faults/AArch32.AsynchExternalAbort

// AArch32.AsynchExternalAbort() // ============================= // Wrapper function for asynchronous external aborts FaultRecord AArch32.AsynchExternalAbort(boolean parity, bits(2) errortype, bit extflag) type = if parity then Fault_AsyncParity else Fault_AsyncExternal; ipaddress = bits(40) UNKNOWN; domain = bits(4) UNKNOWN; level = integer UNKNOWN; acctype = AccType_NORMAL; iswrite = boolean UNKNOWN; debugmoe = bits(4) UNKNOWN; secondstage = FALSE; s2fs1walk = FALSE; return AArch32.CreateFaultRecord(type, ipaddress, domain, level, acctype, iswrite, extflag, debugmoe, errortype, secondstage, s2fs1walk);

Library pseudocode for aarch32/translation/faults/AArch32.DebugFault

// AArch32.DebugFault() // ==================== FaultRecord AArch32.DebugFault(AccType acctype, boolean iswrite, bits(4) debugmoe) ipaddress = bits(40) UNKNOWN; domain = bits(4) UNKNOWN; errortype = bits(2) UNKNOWN; level = integer UNKNOWN; extflag = bit UNKNOWN; secondstage = FALSE; s2fs1walk = FALSE; return AArch32.CreateFaultRecord(Fault_Debug, ipaddress, domain, level, acctype, iswrite, extflag, debugmoe, errortype, secondstage, s2fs1walk);

Library pseudocode for aarch32/translation/faults/AArch32.DomainFault

// AArch32.DomainFault() // ===================== FaultRecord AArch32.DomainFault(bits(4) domain, integer level, AccType acctype, boolean iswrite) ipaddress = bits(40) UNKNOWN; extflag = bit UNKNOWN; debugmoe = bits(4) UNKNOWN; errortype = bits(2) UNKNOWN; secondstage = FALSE; s2fs1walk = FALSE; return AArch32.CreateFaultRecord(Fault_Domain, ipaddress, domain, level, acctype, iswrite, extflag, debugmoe, errortype, secondstage, s2fs1walk);

Library pseudocode for aarch32/translation/faults/AArch32.NoFault

// AArch32.NoFault() // ================= FaultRecord AArch32.NoFault() ipaddress = bits(40) UNKNOWN; domain = bits(4) UNKNOWN; level = integer UNKNOWN; acctype = AccType_NORMAL; iswrite = boolean UNKNOWN; extflag = bit UNKNOWN; debugmoe = bits(4) UNKNOWN; errortype = bits(2) UNKNOWN; secondstage = FALSE; s2fs1walk = FALSE; return AArch32.CreateFaultRecord(Fault_None, ipaddress, domain, level, acctype, iswrite, extflag, debugmoe, errortype, secondstage, s2fs1walk);

Library pseudocode for aarch32/translation/faults/AArch32.PermissionFault

// AArch32.PermissionFault() // ========================= FaultRecord AArch32.PermissionFault(bits(40) ipaddress, bits(4) domain, integer level, AccType acctype, boolean iswrite, boolean secondstage, boolean s2fs1walk) extflag = bit UNKNOWN; debugmoe = bits(4) UNKNOWN; errortype = bits(2) UNKNOWN; return AArch32.CreateFaultRecord(Fault_Permission, ipaddress, domain, level, acctype, iswrite, extflag, debugmoe, errortype, secondstage, s2fs1walk);

Library pseudocode for aarch32/translation/faults/AArch32.TranslationFault

// AArch32.TranslationFault() // ========================== FaultRecord AArch32.TranslationFault(bits(40) ipaddress, bits(4) domain, integer level, AccType acctype, boolean iswrite, boolean secondstage, boolean s2fs1walk) extflag = bit UNKNOWN; debugmoe = bits(4) UNKNOWN; errortype = bits(2) UNKNOWN; return AArch32.CreateFaultRecord(Fault_Translation, ipaddress, domain, level, acctype, iswrite, extflag, debugmoe, errortype, secondstage, s2fs1walk);

Library pseudocode for aarch32/translation/translation/AArch32.FirstStageTranslate

// AArch32.FirstStageTranslate() // ============================= // Perform a stage 1 translation walk. The function used by Address Translation operations is // similar except it uses the translation regime specified for the instruction. AddressDescriptor AArch32.FirstStageTranslate(bits(32) vaddress, AccType acctype, boolean iswrite, boolean wasaligned, integer size) if PSTATE.EL == EL2 then s1_enabled = HSCTLR.M == '1'; elsif EL2Enabled() then tge = (if ELUsingAArch32(EL2) then HCR.TGE else HCR_EL2.TGE); dc = (if ELUsingAArch32(EL2) then HCR.DC else HCR_EL2.DC); s1_enabled = tge == '0' && dc == '0' && SCTLR.M == '1'; else s1_enabled = SCTLR.M == '1'; ipaddress = bits(40) UNKNOWN; secondstage = FALSE; s2fs1walk = FALSE; if s1_enabled then // First stage enabled use_long_descriptor_format = PSTATE.EL == EL2 || TTBCR.EAE == '1'; if use_long_descriptor_format then S1 = AArch32.TranslationTableWalkLD(ipaddress, vaddress, acctype, iswrite, secondstage, s2fs1walk, size); permissioncheck = TRUE; domaincheck = FALSE; else S1 = AArch32.TranslationTableWalkSD(vaddress, acctype, iswrite, size); permissioncheck = TRUE; domaincheck = TRUE; else S1 = AArch32.TranslateAddressS1Off(vaddress, acctype, iswrite); permissioncheck = FALSE; domaincheck = FALSE; if UsingAArch32() && HaveTrapLoadStoreMultipleDeviceExt() && AArch32.ExecutingLSMInstr() then if S1.addrdesc.memattrs.type == MemType_Device && S1.addrdesc.memattrs.device != DeviceType_GRE then nTLSMD = if S1TranslationRegime() == EL2 then HSCTLR.nTLSMD else SCTLR.nTLSMD; if nTLSMD == '0' then S1.addrdesc.fault = AArch32.AlignmentFault(acctype, iswrite, secondstage); // Check for unaligned data accesses to Device memory if ((!wasaligned && acctype != AccType_IFETCH) || (acctype == AccType_DCZVA)) && S1.addrdesc.memattrs.type == MemType_Device && !IsFault(S1.addrdesc) then S1.addrdesc.fault = AArch32.AlignmentFault(acctype, iswrite, secondstage); if !IsFault(S1.addrdesc) && domaincheck then (permissioncheck, abort) = AArch32.CheckDomain(S1.domain, vaddress, S1.level, acctype, iswrite); S1.addrdesc.fault = abort; if !IsFault(S1.addrdesc) && permissioncheck then S1.addrdesc.fault = AArch32.CheckPermission(S1.perms, vaddress, S1.level, S1.domain, S1.addrdesc.paddress.NS, acctype, iswrite); // Check for instruction fetches from Device memory not marked as execute-never. If there has // not been a Permission Fault then the memory is not marked execute-never. if (!IsFault(S1.addrdesc) && S1.addrdesc.memattrs.type == MemType_Device && acctype == AccType_IFETCH) then S1.addrdesc = AArch32.InstructionDevice(S1.addrdesc, vaddress, ipaddress, S1.level, S1.domain, acctype, iswrite, secondstage, s2fs1walk); return S1.addrdesc;

Library pseudocode for aarch32/translation/translation/AArch32.FullTranslate

// AArch32.FullTranslate() // ======================= // Perform both stage 1 and stage 2 translation walks for the current translation regime. The // function used by Address Translation operations is similar except it uses the translation // regime specified for the instruction. AddressDescriptor AArch32.FullTranslate(bits(32) vaddress, AccType acctype, boolean iswrite, boolean wasaligned, integer size) // First Stage Translation S1 = AArch32.FirstStageTranslate(vaddress, acctype, iswrite, wasaligned, size); if !IsFault(S1) && !(HaveNV2Ext() && acctype == AccType_NV2REGISTER) && HasS2Translation() then s2fs1walk = FALSE; result = AArch32.SecondStageTranslate(S1, vaddress, acctype, iswrite, wasaligned, s2fs1walk, size); else result = S1; return result;

Library pseudocode for aarch32/translation/translation/AArch32.SecondStageTranslate

// AArch32.SecondStageTranslate() // ============================== // Perform a stage 2 translation walk. The function used by Address Translation operations is // similar except it uses the translation regime specified for the instruction. AddressDescriptor AArch32.SecondStageTranslate(AddressDescriptor S1, bits(32) vaddress, AccType acctype, boolean iswrite, boolean wasaligned, boolean s2fs1walk, integer size) assert HasS2Translation(); assert IsZero(S1.paddress.address<47:40>); hwupdatewalk = FALSE; if !ELUsingAArch32(EL2) then return AArch64.SecondStageTranslate(S1, ZeroExtend(vaddress, 64), acctype, iswrite, wasaligned, s2fs1walk, size, hwupdatewalk); s2_enabled = HCR.VM == '1' || HCR.DC == '1'; secondstage = TRUE; if s2_enabled then // Second stage enabled ipaddress = S1.paddress.address<39:0>; S2 = AArch32.TranslationTableWalkLD(ipaddress, vaddress, acctype, iswrite, secondstage, s2fs1walk, size); // Check for unaligned data accesses to Device memory if ((!wasaligned && acctype != AccType_IFETCH) || (acctype == AccType_DCZVA)) && S2.addrdesc.memattrs.type == MemType_Device && !IsFault(S2.addrdesc) then S2.addrdesc.fault = AArch32.AlignmentFault(acctype, iswrite, secondstage); // Check for permissions on Stage2 translations if !IsFault(S2.addrdesc) then S2.addrdesc.fault = AArch32.CheckS2Permission(S2.perms, vaddress, ipaddress, S2.level, acctype, iswrite, s2fs1walk); // Check for instruction fetches from Device memory not marked as execute-never. If there // has not been a Permission Fault then the memory is not marked execute-never. if (!s2fs1walk && !IsFault(S2.addrdesc) && S2.addrdesc.memattrs.type == MemType_Device && acctype == AccType_IFETCH) then domain = bits(4) UNKNOWN; S2.addrdesc = AArch32.InstructionDevice(S2.addrdesc, vaddress, ipaddress, S2.level, domain, acctype, iswrite, secondstage, s2fs1walk); // Check for protected table walk if (s2fs1walk && !IsFault(S2.addrdesc) && HCR.PTW == '1' && S2.addrdesc.memattrs.type == MemType_Device) then domain = bits(4) UNKNOWN; S2.addrdesc.fault = AArch32.PermissionFault(ipaddress, domain, S2.level, acctype, iswrite, secondstage, s2fs1walk); result = CombineS1S2Desc(S1, S2.addrdesc); else result = S1; return result;

Library pseudocode for aarch32/translation/translation/AArch32.SecondStageWalk

// AArch32.SecondStageWalk() // ========================= // Perform a stage 2 translation on a stage 1 translation page table walk access. AddressDescriptor AArch32.SecondStageWalk(AddressDescriptor S1, bits(32) vaddress, AccType acctype, boolean iswrite, integer size) assert HasS2Translation(); s2fs1walk = TRUE; wasaligned = TRUE; return AArch32.SecondStageTranslate(S1, vaddress, acctype, iswrite, wasaligned, s2fs1walk, size);

Library pseudocode for aarch32/translation/translation/AArch32.TranslateAddress

// AArch32.TranslateAddress() // ========================== // Main entry point for translating an address AddressDescriptor AArch32.TranslateAddress(bits(32) vaddress, AccType acctype, boolean iswrite, boolean wasaligned, integer size) if !ELUsingAArch32(S1TranslationRegime()) then return AArch64.TranslateAddress(ZeroExtend(vaddress, 64), acctype, iswrite, wasaligned, size); result = AArch32.FullTranslate(vaddress, acctype, iswrite, wasaligned, size); if !(acctype IN {AccType_PTW, AccType_IC, AccType_AT}) && !IsFault(result) then result.fault = AArch32.CheckDebug(vaddress, acctype, iswrite, size); // Update virtual address for abort functions result.vaddress = ZeroExtend(vaddress); return result;

Library pseudocode for aarch32/translation/walk/AArch32.TranslationTableWalkLD

// AArch32.TranslationTableWalkLD() // ================================ // Returns a result of a translation table walk using the Long-descriptor format // // Implementations might cache information from memory in any number of non-coherent TLB // caching structures, and so avoid memory accesses that have been expressed in this // pseudocode. The use of such TLBs is not expressed in this pseudocode. TLBRecord AArch32.TranslationTableWalkLD(bits(40) ipaddress, bits(32) vaddress, AccType acctype, boolean iswrite, boolean secondstage, boolean s2fs1walk, integer size) if !secondstage then assert ELUsingAArch32(S1TranslationRegime()); else assert HaveEL(EL2) && !IsSecure() && ELUsingAArch32(EL2) && HasS2Translation(); TLBRecord result; AddressDescriptor descaddr; bits(64) baseregister; bits(40) inputaddr; // Input Address is 'vaddress' for stage 1, 'ipaddress' for stage 2 domain = bits(4) UNKNOWN; descaddr.memattrs.type = MemType_Normal; // Fixed parameters for the page table walk: // grainsize = Log2(Size of Table) - Size of Table is 4KB in AArch32 // stride = Log2(Address per Level) - Bits of address consumed at each level constant integer grainsize = 12; // Log2(4KB page size) constant integer stride = grainsize - 3; // Log2(page size / 8 bytes) // Derived parameters for the page table walk: // inputsize = Log2(Size of Input Address) - Input Address size in bits // level = Level to start walk from // This means that the number of levels after start level = 3-level if !secondstage then // First stage translation inputaddr = ZeroExtend(vaddress); el = AArch32.AccessUsesEL(acctype); if el == EL2 then inputsize = 32 - UInt(HTCR.T0SZ); basefound = inputsize == 32 || IsZero(inputaddr<31:inputsize>); disabled = FALSE; baseregister = HTTBR; descaddr.memattrs = WalkAttrDecode(HTCR.SH0, HTCR.ORGN0, HTCR.IRGN0, secondstage); reversedescriptors = HSCTLR.EE == '1'; lookupsecure = FALSE; singlepriv = TRUE; hierattrsdisabled = AArch32.HaveHPDExt() && HTCR.HPD == '1'; else basefound = FALSE; disabled = FALSE; t0size = UInt(TTBCR.T0SZ); if t0size == 0 || IsZero(inputaddr<31:(32-t0size)>) then inputsize = 32 - t0size; basefound = TRUE; baseregister = TTBR0; descaddr.memattrs = WalkAttrDecode(TTBCR.SH0, TTBCR.ORGN0, TTBCR.IRGN0, secondstage); hierattrsdisabled = AArch32.HaveHPDExt() && TTBCR.T2E == '1' && TTBCR2.HPD0 == '1'; t1size = UInt(TTBCR.T1SZ); if (t1size == 0 && !basefound) || (t1size > 0 && IsOnes(inputaddr<31:(32-t1size)>)) then inputsize = 32 - t1size; basefound = TRUE; baseregister = TTBR1; descaddr.memattrs = WalkAttrDecode(TTBCR.SH1, TTBCR.ORGN1, TTBCR.IRGN1, secondstage); hierattrsdisabled = AArch32.HaveHPDExt() && TTBCR.T2E == '1' && TTBCR2.HPD1 == '1'; reversedescriptors = SCTLR.EE == '1'; lookupsecure = IsSecure(); singlepriv = FALSE; // The starting level is the number of strides needed to consume the input address level = 4 - RoundUp(Real(inputsize - grainsize) / Real(stride)); else // Second stage translation inputaddr = ipaddress; inputsize = 32 - SInt(VTCR.T0SZ); // VTCR.S must match VTCR.T0SZ[3] if VTCR.S != VTCR.T0SZ<3> then (-, inputsize) = ConstrainUnpredictableInteger(32-7, 32+8, Unpredictable_RESVTCRS); basefound = inputsize == 40 || IsZero(inputaddr<39:inputsize>); disabled = FALSE; descaddr.memattrs = WalkAttrDecode(VTCR.SH0, VTCR.ORGN0, VTCR.IRGN0, secondstage); reversedescriptors = HSCTLR.EE == '1'; singlepriv = TRUE; lookupsecure = FALSE; baseregister = VTTBR; startlevel = UInt(VTCR.SL0); level = 2 - startlevel; if level <= 0 then basefound = FALSE; // Number of entries in the starting level table = // (Size of Input Address)/((Address per level)^(Num levels remaining)*(Size of Table)) startsizecheck = inputsize - ((3 - level)*stride + grainsize); // Log2(Num of entries) // Check for starting level table with fewer than 2 entries or longer than 16 pages. // Lower bound check is: startsizecheck < Log2(2 entries) // That is, VTCR.SL0 == '00' and SInt(VTCR.T0SZ) > 1, Size of Input Address < 2^31 bytes // Upper bound check is: startsizecheck > Log2(pagesize/8*16) // That is, VTCR.SL0 == '01' and SInt(VTCR.T0SZ) < -2, Size of Input Address > 2^34 bytes if startsizecheck < 1 || startsizecheck > stride + 4 then basefound = FALSE; if !basefound || disabled then level = 1; // AArch64 reports this as a level 0 fault result.addrdesc.fault = AArch32.TranslationFault(ipaddress, domain, level, acctype, iswrite, secondstage, s2fs1walk); return result; if !IsZero(baseregister<47:40>) then level = 0; result.addrdesc.fault = AArch32.AddressSizeFault(ipaddress, domain, level, acctype, iswrite, secondstage, s2fs1walk); return result; // Bottom bound of the Base address is: // Log2(8 bytes per entry)+Log2(Number of entries in starting level table) // Number of entries in starting level table = // (Size of Input Address)/((Address per level)^(Num levels remaining)*(Size of Table)) baselowerbound = 3 + inputsize - ((3-level)*stride + grainsize); // Log2(Num of entries*8) baseaddress = baseregister<39:baselowerbound>:Zeros(baselowerbound); ns_table = if lookupsecure then '0' else '1'; ap_table = '00'; xn_table = '0'; pxn_table = '0'; addrselecttop = inputsize - 1; repeat addrselectbottom = (3-level)*stride + grainsize; bits(40) index = ZeroExtend(inputaddr<addrselecttop:addrselectbottom>:'000'); descaddr.paddress.address = ZeroExtend(baseaddress OR index); descaddr.paddress.NS = ns_table; // If there are two stages of translation, then the first stage table walk addresses // are themselves subject to translation if secondstage || !HasS2Translation() || (HaveNV2Ext() && acctype == AccType_NV2REGISTER) then descaddr2 = descaddr; else descaddr2 = AArch32.SecondStageWalk(descaddr, vaddress, acctype, iswrite, 8); // Check for a fault on the stage 2 walk if IsFault(descaddr2) then result.addrdesc.fault = descaddr2.fault; return result; // Update virtual address for abort functions descaddr2.vaddress = ZeroExtend(vaddress); accdesc = CreateAccessDescriptorPTW(acctype, secondstage, s2fs1walk, level); desc = _Mem[descaddr2, 8, accdesc]; if reversedescriptors then desc = BigEndianReverse(desc); if desc<0> == '0' || (desc<1:0> == '01' && level == 3) then // Fault (00), Reserved (10), or Block (01) at level 3. result.addrdesc.fault = AArch32.TranslationFault(ipaddress, domain, level, acctype, iswrite, secondstage, s2fs1walk); return result; // Valid Block, Page, or Table entry if desc<1:0> == '01' || level == 3 then // Block (01) or Page (11) blocktranslate = TRUE; else // Table (11) if !IsZero(desc<47:40>) then result.addrdesc.fault = AArch32.AddressSizeFault(ipaddress, domain, level, acctype, iswrite, secondstage, s2fs1walk); return result; baseaddress = desc<39:grainsize>:Zeros(grainsize); if !secondstage then // Unpack the upper and lower table attributes ns_table = ns_table OR desc<63>; if !secondstage && !hierattrsdisabled then ap_table<1> = ap_table<1> OR desc<62>; // read-only xn_table = xn_table OR desc<60>; // pxn_table and ap_table[0] apply only in EL1&0 translation regimes if !singlepriv then pxn_table = pxn_table OR desc<59>; ap_table<0> = ap_table<0> OR desc<61>; // privileged level = level + 1; addrselecttop = addrselectbottom - 1; blocktranslate = FALSE; until blocktranslate; // Check the output address is inside the supported range if !IsZero(desc<47:40>) then result.addrdesc.fault = AArch32.AddressSizeFault(ipaddress, domain, level, acctype, iswrite, secondstage, s2fs1walk); return result; // Unpack the descriptor into address and upper and lower block attributes outputaddress = desc<39:addrselectbottom>:inputaddr<addrselectbottom-1:0>; // Check the access flag if desc<10> == '0' then result.addrdesc.fault = AArch32.AccessFlagFault(ipaddress, domain, level, acctype, iswrite, secondstage, s2fs1walk); return result; xn = desc<54>; // Bit[54] of the block/page descriptor holds UXN pxn = desc<53>; // Bit[53] of the block/page descriptor holds PXN ap = desc<7:6>:'1'; // Bits[7:6] of the block/page descriptor hold AP[2:1] contiguousbit = desc<52>; nG = desc<11>; sh = desc<9:8>; memattr = desc<5:2>; // AttrIndx and NS bit in stage 1 result.domain = bits(4) UNKNOWN; // Domains not used result.level = level; result.blocksize = 2^((3-level)*stride + grainsize); // Stage 1 translation regimes also inherit attributes from the tables if !secondstage then result.perms.xn = xn OR xn_table; result.perms.ap<2> = ap<2> OR ap_table<1>; // Force read-only // PXN, nG and AP[1] apply only in EL1&0 stage 1 translation regimes if !singlepriv then result.perms.ap<1> = ap<1> AND NOT(ap_table<0>); // Force privileged only result.perms.pxn = pxn OR pxn_table; // Pages from Non-secure tables are marked non-global in Secure EL1&0 if IsSecure() then result.nG = nG OR ns_table; else result.nG = nG; else result.perms.ap<1> = '1'; result.perms.pxn = '0'; result.nG = '0'; result.GP = desc<50>; // Stage 1 block or pages might be guarded result.perms.ap<0> = '1'; result.addrdesc.memattrs = AArch32.S1AttrDecode(sh, memattr<2:0>, acctype); result.addrdesc.paddress.NS = memattr<3> OR ns_table; else result.perms.ap<2:1> = ap<2:1>; result.perms.ap<0> = '1'; result.perms.xn = xn; if HaveExtendedExecuteNeverExt() then result.perms.xxn = desc<53>; result.perms.pxn = '0'; result.nG = '0'; if s2fs1walk then result.addrdesc.memattrs = S2AttrDecode(sh, memattr, AccType_PTW); else result.addrdesc.memattrs = S2AttrDecode(sh, memattr, acctype); result.addrdesc.paddress.NS = '1'; result.addrdesc.paddress.address = ZeroExtend(outputaddress); result.addrdesc.fault = AArch32.NoFault(); result.contiguous = contiguousbit == '1'; if HaveCommonNotPrivateTransExt() then result.CnP = baseregister<0>; return result;
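With grainsize = 12 and stride = 9, the start level and the per-level index bit ranges fall out of simple arithmetic. A C sketch of just that arithmetic for the stage 1 case (helper names are hypothetical; inputs are assumed already validated):

#include <stdio.h>

enum { GRAINSIZE = 12, STRIDE = GRAINSIZE - 3 };  /* 4KB tables, 8B entries */

/* Start level: 4 minus the number of 9-bit strides needed to cover
 * the input-address bits above the 12-bit page offset (ceiling division). */
static int start_level(int inputsize) {
    return 4 - (inputsize - GRAINSIZE + STRIDE - 1) / STRIDE;
}

/* Lowest input-address bit consumed at a given level (addrselectbottom). */
static int addr_select_bottom(int level) {
    return (3 - level) * STRIDE + GRAINSIZE;
}

int main(void) {
    for (int inputsize = 25; inputsize <= 32; inputsize += 7) {
        int lvl = start_level(inputsize);
        printf("inputsize=%d: start level %d, index bits [%d:%d]\n",
               inputsize, lvl, inputsize - 1, addr_select_bottom(lvl));
    }
    /* inputsize=32 -> level 1, bits [31:30]; inputsize=25 -> level 2,
     * bits [24:21], i.e. a 16-entry starting table. */
    return 0;
}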

Library pseudocode for aarch32/translation/walk/AArch32.TranslationTableWalkSD

// AArch32.TranslationTableWalkSD() // ================================ // Returns a result of a translation table walk using the Short-descriptor format // // Implementations might cache information from memory in any number of non-coherent TLB // caching structures, and so avoid memory accesses that have been expressed in this // pseudocode. The use of such TLBs is not expressed in this pseudocode. TLBRecord AArch32.TranslationTableWalkSD(bits(32) vaddress, AccType acctype, boolean iswrite, integer size) assert ELUsingAArch32(S1TranslationRegime()); // This is only called when address translation is enabled TLBRecord result; AddressDescriptor l1descaddr; AddressDescriptor l2descaddr; bits(40) outputaddress; // Variables for Abort functions ipaddress = bits(40) UNKNOWN; secondstage = FALSE; s2fs1walk = FALSE; NS = bit UNKNOWN; // Default setting of the domain domain = bits(4) UNKNOWN; // Determine correct Translation Table Base Register to use. bits(64) ttbr; n = UInt(TTBCR.N); if n == 0 || IsZero(vaddress<31:(32-n)>) then ttbr = TTBR0; disabled = (TTBCR.PD0 == '1'); else ttbr = TTBR1; disabled = (TTBCR.PD1 == '1'); n = 0; // TTBR1 translation always works like N=0 TTBR0 translation // Check this Translation Table Base Register is not disabled. if disabled then level = 1; result.addrdesc.fault = AArch32.TranslationFault(ipaddress, domain, level, acctype, iswrite, secondstage, s2fs1walk); return result; // Obtain descriptor from initial lookup. l1descaddr.paddress.address = ZeroExtend(ttbr<31:14-n>:vaddress<31-n:20>:'00'); l1descaddr.paddress.NS = if IsSecure() then '0' else '1'; IRGN = ttbr<0>:ttbr<6>; // TTBR.IRGN RGN = ttbr<4:3>; // TTBR.RGN SH = ttbr<1>:ttbr<5>; // TTBR.S:TTBR.NOS l1descaddr.memattrs = WalkAttrDecode(SH, RGN, IRGN, secondstage); if !HaveEL(EL2) || (IsSecure() && !IsSecureEL2Enabled()) then // if only 1 stage of translation l1descaddr2 = l1descaddr; else l1descaddr2 = AArch32.SecondStageWalk(l1descaddr, vaddress, acctype, iswrite, 4); // Check for a fault on the stage 2 walk if IsFault(l1descaddr2) then result.addrdesc.fault = l1descaddr2.fault; return result; // Update virtual address for abort functions l1descaddr2.vaddress = ZeroExtend(vaddress); accdesc = CreateAccessDescriptorPTW(acctype, secondstage, s2fs1walk, level); l1desc = _Mem[l1descaddr2, 4,accdesc]; if SCTLR.EE == '1' then l1desc = BigEndianReverse(l1desc); // Process descriptor from initial lookup. case l1desc<1:0> of when '00' // Fault, Reserved level = 1; result.addrdesc.fault = AArch32.TranslationFault(ipaddress, domain, level, acctype, iswrite, secondstage, s2fs1walk); return result; when '01' // Large page or Small page domain = l1desc<8:5>; level = 2; pxn = l1desc<2>; NS = l1desc<3>; // Obtain descriptor from level 2 lookup. 
l2descaddr.paddress.address = ZeroExtend(l1desc<31:10>:vaddress<19:12>:'00'); l2descaddr.paddress.NS = if IsSecure() then '0' else '1'; l2descaddr.memattrs = l1descaddr.memattrs; if !HaveEL(EL2) || (IsSecure() && !IsSecureEL2Enabled()) then // if only 1 stage of translation l2descaddr2 = l2descaddr; else l2descaddr2 = AArch32.SecondStageWalk(l2descaddr, vaddress, acctype, iswrite, 4); // Check for a fault on the stage 2 walk if IsFault(l2descaddr2) then result.addrdesc.fault = l2descaddr2.fault; return result; // Update virtual address for abort functions l2descaddr2.vaddress = ZeroExtend(vaddress); accdesc = CreateAccessDescriptorPTW(acctype, secondstage, s2fs1walk, level); l2desc = _Mem[l2descaddr2, 4, accdesc]; if SCTLR.EE == '1' then l2desc = BigEndianReverse(l2desc); // Process descriptor from level 2 lookup. if l2desc<1:0> == '00' then result.addrdesc.fault = AArch32.TranslationFault(ipaddress, domain, level, acctype, iswrite, secondstage, s2fs1walk); return result; nG = l2desc<11>; S = l2desc<10>; ap = l2desc<9,5:4>; if SCTLR.AFE == '1' && l2desc<4> == '0' then // ARMv8 VMSAv8-32 does not support hardware management of the Access flag. result.addrdesc.fault = AArch32.AccessFlagFault(ipaddress, domain, level, acctype, iswrite, secondstage, s2fs1walk); return result; if l2desc<1> == '0' then // Large page xn = l2desc<15>; tex = l2desc<14:12>; c = l2desc<3>; b = l2desc<2>; blocksize = 64; outputaddress = ZeroExtend(l2desc<31:16>:vaddress<15:0>); else // Small page tex = l2desc<8:6>; c = l2desc<3>; b = l2desc<2>; xn = l2desc<0>; blocksize = 4; outputaddress = ZeroExtend(l2desc<31:12>:vaddress<11:0>); when '1x' // Section or Supersection NS = l1desc<19>; nG = l1desc<17>; S = l1desc<16>; ap = l1desc<15,11:10>; tex = l1desc<14:12>; xn = l1desc<4>; c = l1desc<3>; b = l1desc<2>; pxn = l1desc<0>; level = 1; if SCTLR.AFE == '1' && l1desc<10> == '0' then // ARMv8 VMSAv8-32 does not support hardware management of the Access flag. result.addrdesc.fault = AArch32.AccessFlagFault(ipaddress, domain, level, acctype, iswrite, secondstage, s2fs1walk); return result; if l1desc<18> == '0' then // Section domain = l1desc<8:5>; blocksize = 1024; outputaddress = ZeroExtend(l1desc<31:20>:vaddress<19:0>); else // Supersection domain = '0000'; blocksize = 16384; outputaddress = l1desc<8:5>:l1desc<23:20>:l1desc<31:24>:vaddress<23:0>; // Decode the TEX, C, B and S bits to produce the TLBRecord's memory attributes if SCTLR.TRE == '0' then if RemapRegsHaveResetValues() then result.addrdesc.memattrs = AArch32.DefaultTEXDecode(tex, c, b, S, acctype); else result.addrdesc.memattrs = MemoryAttributes IMPLEMENTATION_DEFINED; else result.addrdesc.memattrs = AArch32.RemappedTEXDecode(tex, c, b, S, acctype); // Set the rest of the TLBRecord, try to add it to the TLB, and return it. result.perms.ap = ap; result.perms.xn = xn; result.perms.pxn = pxn; result.nG = nG; result.domain = domain; result.level = level; result.blocksize = blocksize; result.addrdesc.paddress.address = ZeroExtend(outputaddress); result.addrdesc.paddress.NS = if IsSecure() then NS else '1'; result.addrdesc.fault = AArch32.NoFault(); return result;
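As a worked example of the first-level lookup above, this Python sketch (illustrative; register fields are passed in as plain integers) models the TTBR0/TTBR1 selection and the descriptor address ttbr<31:14-n>:vaddress<31-n:20>:'00':

def l1_descriptor_address(ttbr0, ttbr1, ttbcr_n, vaddress):
    # TTBR1 is used when N != 0 and the top N bits of the VA are non-zero.
    if ttbcr_n == 0 or (vaddress >> (32 - ttbcr_n)) == 0:
        ttbr, n = ttbr0, ttbcr_n
    else:
        ttbr, n = ttbr1, 0          # TTBR1 translation behaves like N=0
    base = ttbr & ~((1 << (14 - n)) - 1)                # ttbr<31:14-n>
    index = (vaddress >> 20) & ((1 << (12 - n)) - 1)    # vaddress<31-n:20>
    return base | (index << 2)                          # 4-byte descriptors

# With N=0, VA 0x12345678 selects entry 0x123 of the 16KB level-1 table.
assert l1_descriptor_address(0x4000_0000, 0, 0, 0x1234_5678) == 0x4000_048C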

Library pseudocode for aarch32/translation/walk/RemapRegsHaveResetValues

boolean RemapRegsHaveResetValues();

Library pseudocode for aarch64/debug/breakpoint/AArch64.BreakpointMatch

// AArch64.BreakpointMatch() // ========================= // Breakpoint matching in an AArch64 translation regime. boolean AArch64.BreakpointMatch(integer n, bits(64) vaddress, AccType acctype, integer size) assert !ELUsingAArch32(S1TranslationRegime()); assert n <= UInt(ID_AA64DFR0_EL1.BRPs); enabled = DBGBCR_EL1[n].E == '1'; ispriv = PSTATE.EL != EL0; linked = DBGBCR_EL1[n].BT == '0x01'; isbreakpnt = TRUE; linked_to = FALSE; state_match = AArch64.StateMatch(DBGBCR_EL1[n].SSC, DBGBCR_EL1[n].HMC, DBGBCR_EL1[n].PMC, linked, DBGBCR_EL1[n].LBN, isbreakpnt, acctype, ispriv); value_match = AArch64.BreakpointValueMatch(n, vaddress, linked_to); if HaveAnyAArch32() && size == 4 then // Check second halfword // If the breakpoint address and BAS of an Address breakpoint match the address of the // second halfword of an instruction, but not the address of the first halfword, it is // CONSTRAINED UNPREDICTABLE whether or not this breakpoint generates a Breakpoint debug // event. match_i = AArch64.BreakpointValueMatch(n, vaddress + 2, linked_to); if !value_match && match_i then value_match = ConstrainUnpredictableBool(Unpredictable_BPMATCHHALF); if vaddress<1> == '1' && DBGBCR_EL1[n].BAS == '1111' then // The above notwithstanding, if DBGBCR_EL1[n].BAS == '1111', then it is CONSTRAINED // UNPREDICTABLE whether or not a Breakpoint debug event is generated for an instruction // at the address DBGBVR_EL1[n]+2. if value_match then value_match = ConstrainUnpredictableBool(Unpredictable_BPMATCHHALF); match = value_match && state_match && enabled; return match;

Library pseudocode for aarch64/debug/breakpoint/AArch64.BreakpointValueMatch

// AArch64.BreakpointValueMatch() // ============================== boolean AArch64.BreakpointValueMatch(integer n, bits(64) vaddress, boolean linked_to) // "n" is the identity of the breakpoint unit to match against. // "vaddress" is the current instruction address, ignored if linked_to is TRUE and for Context // matching breakpoints. // "linked_to" is TRUE if this is a call from StateMatch for linking. // If a non-existent breakpoint is selected then it is CONSTRAINED UNPREDICTABLE whether this gives // no match or the breakpoint is mapped to another UNKNOWN implemented breakpoint. if n > UInt(ID_AA64DFR0_EL1.BRPs) then (c, n) = ConstrainUnpredictableInteger(0, UInt(ID_AA64DFR0_EL1.BRPs), Unpredictable_BPNOTIMPL); assert c IN {Constraint_DISABLED, Constraint_UNKNOWN}; if c == Constraint_DISABLED then return FALSE; // If this breakpoint is not enabled, it cannot generate a match. (This could also happen on a // call from StateMatch for linking). if DBGBCR_EL1[n].E == '0' then return FALSE; context_aware = (n >= UInt(ID_AA64DFR0_EL1.BRPs) - UInt(ID_AA64DFR0_EL1.CTX_CMPs)); // If BT is set to a reserved type, behaves either as disabled or as a not-reserved type. type = DBGBCR_EL1[n].BT; if ((type IN {'011x','11xx'} && !HaveVirtHostExt()) || // Context matching type == '010x' || // Reserved (type != '0x0x' && !context_aware) || // Context matching (type == '1xxx' && !HaveEL(EL2))) then // EL2 extension (c, type) = ConstrainUnpredictableBits(Unpredictable_RESBPTYPE); assert c IN {Constraint_DISABLED, Constraint_UNKNOWN}; if c == Constraint_DISABLED then return FALSE; // Otherwise the value returned by ConstrainUnpredictableBits must be a not-reserved value // Determine what to compare against. match_addr = (type == '0x0x'); match_vmid = (type == '10xx'); match_cid = (type == '001x'); match_cid1 = (type IN { '101x', 'x11x'}); match_cid2 = (type == '11xx'); linked = (type == 'xxx1'); // If this is a call from StateMatch, return FALSE if the breakpoint is not programmed for a // VMID and/or context ID match, or if not context-aware. The above assertions mean that the // code can just test for match_addr == TRUE to confirm all these things. if linked_to && (!linked || match_addr) then return FALSE; // If called from BreakpointMatch return FALSE for Linked context ID and/or VMID matches. if !linked_to && linked && !match_addr then return FALSE; // Do the comparison. if match_addr then byte = UInt(vaddress<1:0>); if HaveAnyAArch32() then // T32 instructions can be executed at EL0 in an AArch64 translation regime.
assert byte IN {0,2}; // "vaddress" is halfword aligned byte_select_match = (DBGBCR_EL1[n].BAS<byte> == '1'); else assert byte == 0; // "vaddress" is word aligned byte_select_match = TRUE; // DBGBCR_EL1[n].BAS<byte> is RES1 top = AddrTop(vaddress, TRUE, PSTATE.EL); BVR_match = vaddress<top:2> == DBGBVR_EL1[n]<top:2> && byte_select_match; elsif match_cid then if IsInHost() then BVR_match = (CONTEXTIDR_EL2 == DBGBVR_EL1[n]<31:0>); else BVR_match = (PSTATE.EL IN {EL0,EL1} && CONTEXTIDR_EL1 == DBGBVR_EL1[n]<31:0>); elsif match_cid1 then BVR_match = (PSTATE.EL IN {EL0,EL1} && !IsInHost() && CONTEXTIDR_EL1 == DBGBVR_EL1[n]<31:0>); if match_vmid then if !Have16bitVMID() || VTCR_EL2.VS == '0' then vmid = ZeroExtend(VTTBR_EL2.VMID<7:0>, 16); bvr_vmid = ZeroExtend(DBGBVR_EL1[n]<39:32>, 16); else vmid = VTTBR_EL2.VMID; bvr_vmid = DBGBVR_EL1[n]<47:32>; BXVR_match = (EL2Enabled() && PSTATE.EL IN {EL0,EL1} && !IsInHost() && vmid == bvr_vmid); elsif match_cid2 then BXVR_match = (!IsSecure() && HaveVirtHostExt() && DBGBVR_EL1[n]<63:32> == CONTEXTIDR_EL2); bvr_match_valid = (match_addr || match_cid || match_cid1); bxvr_match_valid = (match_vmid || match_cid2); match = (!bxvr_match_valid || BXVR_match) && (!bvr_match_valid || BVR_match); return match;
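The comparisons against patterns such as '0x0x' above use the pseudocode's bit-pattern notation, where 'x' is a don't-care bit. A minimal Python sketch of that notation (illustrative helper, not an architectural function):

def bits_match(value, pattern):
    # True if the bit-string matches the pattern; 'x' matches either bit.
    assert len(value) == len(pattern)
    return all(p == 'x' or v == p for v, p in zip(value, pattern))

# BT = '0000' is an unlinked address match: match_addr, not linked.
bt = '0000'
assert bits_match(bt, '0x0x')        # match_addr
assert not bits_match(bt, 'xxx1')    # linked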

Library pseudocode for aarch64/debug/breakpoint/AArch64.StateMatch

// AArch64.StateMatch() // ==================== // Determine whether a breakpoint or watchpoint is enabled in the current mode and state. boolean AArch64.StateMatch(bits(2) SSC, bit HMC, bits(2) PxC, boolean linked, bits(4) LBN, boolean isbreakpnt, AccType acctype, boolean ispriv) // "SSC", "HMC", "PxC" are the control fields from the DBGBCR[n] or DBGWCR[n] register. // "linked" is TRUE if this is a linked breakpoint/watchpoint type. // "LBN" is the linked breakpoint number from the DBGBCR[n] or DBGWCR[n] register. // "isbreakpnt" is TRUE for breakpoints, FALSE for watchpoints. // "ispriv" is valid for watchpoints, and selects between privileged and unprivileged accesses. // If parameters are set to a reserved type, behaves as either disabled or a defined type if ((HMC:SSC:PxC) IN {'011xx','100x0','101x0','11010','11101','1111x'} || // Reserved (HMC == '0' && PxC == '00' && (!isbreakpnt || !HaveAArch32EL(EL1))) || // Usr/Svc/Sys (SSC IN {'01','10'} && !HaveEL(EL3)) || // No EL3 (HMC:SSC != '000' && HMC:SSC != '111' && !HaveEL(EL3) && !HaveEL(EL2)) || // No EL3/EL2 (HMC:SSC:PxC == '11100' && !HaveEL(EL2))) then // No EL2 (c, <HMC,SSC,PxC>) = ConstrainUnpredictableBits(Unpredictable_RESBPWPCTRL); assert c IN {Constraint_DISABLED, Constraint_UNKNOWN}; if c == Constraint_DISABLED then return FALSE; // Otherwise the value returned by ConstrainUnpredictableBits must be a not-reserved value EL3_match = HaveEL(EL3) && HMC == '1' && SSC<0> == '0'; EL2_match = HaveEL(EL2) && HMC == '1'; EL1_match = PxC<0> == '1'; EL0_match = PxC<1> == '1'; el = if HaveNV2Ext() && acctype == AccType_NV2REGISTER then EL2 else PSTATE.EL; if !ispriv && !isbreakpnt then priv_match = EL0_match; else case el of when EL3 priv_match = EL3_match; when EL2 priv_match = EL2_match; when EL1 priv_match = EL1_match; when EL0 priv_match = EL0_match; case SSC of when '00' security_state_match = TRUE; // Both when '01' security_state_match = !IsSecure(); // Non-secure only when '10' security_state_match = IsSecure(); // Secure only when '11' security_state_match = TRUE; // Both if linked then // "LBN" must be an enabled context-aware breakpoint unit. If it is not context-aware then // it is CONSTRAINED UNPREDICTABLE whether this gives no match, or LBN is mapped to some // UNKNOWN breakpoint that is context-aware. lbn = UInt(LBN); first_ctx_cmp = (UInt(ID_AA64DFR0_EL1.BRPs) - UInt(ID_AA64DFR0_EL1.CTX_CMPs)); last_ctx_cmp = UInt(ID_AA64DFR0_EL1.BRPs); if (lbn < first_ctx_cmp || lbn > last_ctx_cmp) then (c, lbn) = ConstrainUnpredictableInteger(first_ctx_cmp, last_ctx_cmp, Unpredictable_BPNOTCTXCMP); assert c IN {Constraint_DISABLED, Constraint_NONE, Constraint_UNKNOWN}; case c of when Constraint_DISABLED return FALSE; // Disabled when Constraint_NONE linked = FALSE; // No linking // Otherwise ConstrainUnpredictableInteger returned a context-aware breakpoint if linked then vaddress = bits(64) UNKNOWN; linked_to = TRUE; linked_match = AArch64.BreakpointValueMatch(lbn, vaddress, linked_to); return priv_match && security_state_match && (!linked || linked_match);

Library pseudocode for aarch64/debug/enables/AArch64.GenerateDebugExceptions

// AArch64.GenerateDebugExceptions() // ================================= boolean AArch64.GenerateDebugExceptions() return AArch64.GenerateDebugExceptionsFrom(PSTATE.EL, IsSecure(), PSTATE.D);

Library pseudocode for aarch64/debug/enables/AArch64.GenerateDebugExceptionsFrom

// AArch64.GenerateDebugExceptionsFrom() // ===================================== boolean AArch64.GenerateDebugExceptionsFrom(bits(2) from, boolean secure, bit mask) if OSLSR_EL1.OSLK == '1' || DoubleLockStatus() || Halted() then return FALSE; route_to_el2 = HaveEL(EL2) && !secure && (HCR_EL2.TGE == '1' || MDCR_EL2.TDE == '1'); target = (if route_to_el2 then EL2 else EL1); enabled = !HaveEL(EL3) || !secure || MDCR_EL3.SDD == '0'; if from == target then enabled = enabled && MDSCR_EL1.KDE == '1' && mask == '0'; else enabled = enabled && UInt(target) > UInt(from); return enabled;
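The enable computation reduces to a short predicate once the routing target is known. A hedged Python restatement (illustrative; the OS Lock, Double Lock, and Halted checks are omitted, and register fields are parameters):

def debug_exceptions_enabled(from_el, target_el, secure, have_el3, sdd, kde, d_mask):
    enabled = (not have_el3) or (not secure) or sdd == 0   # MDCR_EL3.SDD gate
    if from_el == target_el:
        # Same-EL debug needs MDSCR_EL1.KDE == 1 and PSTATE.D == 0.
        enabled = enabled and kde == 1 and d_mask == 0
    else:
        enabled = enabled and target_el > from_el
    return enabled

# At EL1 targeting EL1: disabled until KDE is set and PSTATE.D is clear.
assert not debug_exceptions_enabled(1, 1, False, True, 0, kde=0, d_mask=0)
assert debug_exceptions_enabled(1, 1, False, True, 0, kde=1, d_mask=0)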

Library pseudocode for aarch64/debug/pmu/AArch64.CheckForPMUOverflow

// AArch64.CheckForPMUOverflow() // ============================= // Signal Performance Monitors overflow IRQ and CTI overflow events boolean AArch64.CheckForPMUOverflow() pmuirq = PMCR_EL0.E == '1' && PMINTENSET_EL1<31> == '1' && PMOVSSET_EL0<31> == '1'; for n = 0 to UInt(PMCR_EL0.N) - 1 if HaveEL(EL2) then E = (if n < UInt(MDCR_EL2.HPMN) then PMCR_EL0.E else MDCR_EL2.HPME); else E = PMCR_EL0.E; if E == '1' && PMINTENSET_EL1<n> == '1' && PMOVSSET_EL0<n> == '1' then pmuirq = TRUE; SetInterruptRequestLevel(InterruptID_PMUIRQ, if pmuirq then HIGH else LOW); CTI_SetEventLevel(CrossTriggerIn_PMUOverflow, if pmuirq then HIGH else LOW); // The request remains set until the condition is cleared. (For example, an interrupt handler // or cross-triggered event handler clears the overflow status flag by writing to PMOVSCLR_EL0.) return pmuirq;
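A compact model of the overflow-IRQ aggregation (illustrative; register values are plain integers, and bit 31 stands for the cycle counter):

def pmu_overflow_irq(pmcr_e, hpmn, hpme, intenset, ovsset, num_counters):
    # The IRQ is the OR of every enabled counter that has both its
    # interrupt-enable and overflow-status bits set.
    pmuirq = bool(pmcr_e and (intenset >> 31) & 1 and (ovsset >> 31) & 1)
    for n in range(num_counters):
        e = pmcr_e if n < hpmn else hpme   # counters >= HPMN are reserved for EL2
        if e and (intenset >> n) & 1 and (ovsset >> n) & 1:
            pmuirq = True
    return pmuirq

# Counter 2 overflowed with its interrupt enabled: IRQ asserted.
assert pmu_overflow_irq(1, hpmn=4, hpme=0, intenset=1 << 2, ovsset=1 << 2, num_counters=6)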

Library pseudocode for aarch64/debug/pmu/AArch64.CountEvents

// AArch64.CountEvents() // ===================== // Return TRUE if counter "n" should count its event. For the cycle counter, n == 31. boolean AArch64.CountEvents(integer n) assert n == 31 || n < UInt(PMCR_EL0.N); // Event counting is disabled in Debug state debug = Halted(); // In Non-secure state, some counters are reserved for EL2 if HaveEL(EL2) then E = if n < UInt(MDCR_EL2.HPMN) || n == 31 then PMCR_EL0.E else MDCR_EL2.HPME; else E = PMCR_EL0.E; enabled = E == '1' && PMCNTENSET_EL0<n> == '1'; if !IsSecure() then // Event counting in Non-secure state is allowed unless all of: // * EL2 and the HPMD Extension are implemented // * Executing at EL2 // * PMNx is not reserved for EL2 // * MDCR_EL2.HPMD == 1 if HaveHPMDExt() && PSTATE.EL == EL2 && (n < UInt(MDCR_EL2.HPMN) || n == 31) then prohibited = (MDCR_EL2.HPMD == '1'); else prohibited = FALSE; else // Event counting in Secure state is prohibited unless any one of: // * EL3 is not implemented // * EL3 is using AArch64 and MDCR_EL3.SPME == 1 prohibited = HaveEL(EL3) && MDCR_EL3.SPME == '0'; // The IMPLEMENTATION DEFINED authentication interface might override software controls if prohibited && !HaveNoSecurePMUDisableOverride() then prohibited = !ExternalSecureNoninvasiveDebugEnabled(); // For the cycle counter, PMCR_EL0.DP enables counting when otherwise prohibited if prohibited && n == 31 then prohibited = (PMCR_EL0.DP == '1'); // Event counting can be filtered by the {P, U, NSK, NSU, NSH, M} bits filter = if n == 31 then PMCCFILTR else PMEVTYPER[n]; P = filter<31>; U = filter<30>; NSK = if HaveEL(EL3) then filter<29> else '0'; NSU = if HaveEL(EL3) then filter<28> else '0'; NSH = if HaveEL(EL2) then filter<27> else '0'; M = if HaveEL(EL3) then filter<26> else '0'; case PSTATE.EL of when EL0 filtered = if IsSecure() then U == '1' else U != NSU; when EL1 filtered = if IsSecure() then P == '1' else P != NSK; when EL2 filtered = (NSH == '0'); when EL3 filtered = (M != P); return !debug && enabled && !prohibited && !filtered;
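The {P, U, NSK, NSU, NSH, M} filter at the end is a small truth table over the current Exception level and Security state; as a sketch (illustrative, with the filter bits passed as integers):

def event_filtered(el, secure, P, U, NSK, NSU, NSH, M):
    # Returns True if the event is filtered out (i.e. must not be counted).
    if el == 0:
        return U == 1 if secure else U != NSU
    if el == 1:
        return P == 1 if secure else P != NSK
    if el == 2:
        return NSH == 0
    return M != P   # EL3

# Non-secure EL1: the event counts only while P and NSK agree.
assert not event_filtered(1, False, P=0, U=0, NSK=0, NSU=0, NSH=0, M=0)
assert event_filtered(1, False, P=1, U=0, NSK=0, NSU=0, NSH=0, M=0)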

Library pseudocode for aarch64/debug/statisticalprofiling/CheckProfilingBufferAccess

// CheckProfilingBufferAccess() // ============================ SysRegAccess CheckProfilingBufferAccess() if !HaveStatisticalProfiling() || PSTATE.EL == EL0 || UsingAArch32() then return SysRegAccess_UNDEFINED; if EL2Enabled() && PSTATE.EL == EL1 && MDCR_EL2.E2PB<0> != '1' then return SysRegAccess_TrapToEL2; if HaveEL(EL3) && PSTATE.EL != EL3 && MDCR_EL3.NSPB != SCR_EL3.NS:'1' then return SysRegAccess_TrapToEL3; return SysRegAccess_OK;

Library pseudocode for aarch64/debug/statisticalprofiling/CheckStatisticalProfilingAccess

// CheckStatisticalProfilingAccess() // ================================= SysRegAccess CheckStatisticalProfilingAccess() if !HaveStatisticalProfiling() || PSTATE.EL == EL0 || UsingAArch32() then return SysRegAccess_UNDEFINED; if EL2Enabled() && PSTATE.EL == EL1 && MDCR_EL2.TPMS == '1' then return SysRegAccess_TrapToEL2; if HaveEL(EL3) && PSTATE.EL != EL3 && MDCR_EL3.NSPB != SCR_EL3.NS:'1' then return SysRegAccess_TrapToEL3; return SysRegAccess_OK;

Library pseudocode for aarch64/debug/statisticalprofiling/CollectContextIDR1

// CollectContextIDR1() // ==================== boolean CollectContextIDR1() if !StatisticalProfilingEnabled() then return FALSE; if PSTATE.EL == EL2 then return FALSE; if EL2Enabled() && HCR_EL2.TGE == '1' then return FALSE; return PMSCR_EL1.CX == '1';

Library pseudocode for aarch64/debug/statisticalprofiling/CollectContextIDR2

// CollectContextIDR2() // ==================== boolean CollectContextIDR2() if !StatisticalProfilingEnabled() then return FALSE; if !EL2Enabled() then return FALSE; return PMSCR_EL2.CX == '1';

Library pseudocode for aarch64/debug/statisticalprofiling/CollectPhysicalAddress

// CollectPhysicalAddress() // ======================== boolean CollectPhysicalAddress() if !StatisticalProfilingEnabled() then return FALSE; (secure, el) = ProfilingBufferOwner(); if !secure && HaveEL(EL2) then return PMSCR_EL2.PA == '1' && (el == EL2 || PMSCR_EL1.PA == '1'); else return PMSCR_EL1.PA == '1';

Library pseudocode for aarch64/debug/statisticalprofiling/CollectRecord

// CollectRecord() // =============== boolean CollectRecord(bits(64) events, integer total_latency, OpType optype) assert StatisticalProfilingEnabled(); if PMSFCR_EL1.FE == '1' then e = events<63:48,31:24,15:12,7,5,3,1>; m = PMSEVFR_EL1<63:48,31:24,15:12,7,5,3,1>; // Check for UNPREDICTABLE case if IsZero(PMSEVFR_EL1) && ConstrainUnpredictableBool(Unpredictable_ZEROPMSEVFR) then return FALSE; if !IsZero(NOT(e) AND m) then return FALSE; if PMSFCR_EL1.FT == '1' then // Check for UNPREDICTABLE case if IsZero(PMSFCR_EL1.<B,LD,ST>) && ConstrainUnpredictableBool(Unpredictable_NOOPTYPES) then return FALSE; case optype of when OpType_Branch if PMSFCR_EL1.B == '0' then return FALSE; when OpType_Load if PMSFCR_EL1.LD == '0' then return FALSE; when OpType_Store if PMSFCR_EL1.ST == '0' then return FALSE; when OpType_LoadAtomic if PMSFCR_EL1.<LD,ST> == '00' then return FALSE; otherwise return FALSE; if PMSFCR_EL1.FL == '1' then if IsZero(PMSLATFR_EL1.MINLAT) && ConstrainUnpredictableBool(Unpredictable_ZEROMINLATENCY) then // UNPREDICTABLE case return FALSE; if total_latency < UInt(PMSLATFR_EL1.MINLAT) then return FALSE; return TRUE;
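The event-filter test drops a record whenever a monitored event bit is clear: !IsZero(NOT(e) AND m). With the defined PMSEVFR_EL1 bit positions quoted above, that is (in an illustrative Python sketch):

# Defined filtering bits of PMSEVFR_EL1: <63:48,31:24,15:12,7,5,3,1>.
PMSEVFR_MASK = ((0xFFFF << 48) | (0xFF << 24) | (0xF << 12)
                | (1 << 7) | (1 << 5) | (1 << 3) | (1 << 1))

def passes_event_filter(events, pmsevfr):
    # The record passes only if every selected (mask = 1) event bit is set.
    m = pmsevfr & PMSEVFR_MASK
    return (~events & m) == 0

assert passes_event_filter(events=0b10, pmsevfr=0b10)
assert not passes_event_filter(events=0b00, pmsevfr=0b10)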

Library pseudocode for aarch64/debug/statisticalprofiling/CollectTimeStamp

// CollectTimeStamp() // ================== TimeStamp CollectTimeStamp() if !StatisticalProfilingEnabled() then return TimeStamp_None; (secure, el) = ProfilingBufferOwner(); if el == EL2 then if PMSCR_EL2.TS == '0' then return TimeStamp_None; else if PMSCR_EL1.TS == '0' then return TimeStamp_None; if EL2Enabled() then pct = PMSCR_EL2.PCT == '1' && (el == EL2 || PMSCR_EL1.PCT == '1'); else pct = PMSCR_EL1.PCT == '1'; return (if pct then TimeStamp_Physical else TimeStamp_Virtual);

Library pseudocode for aarch64/debug/statisticalprofiling/OpType

enumeration OpType { OpType_Load, // Any memory-read operation other than atomics, compare-and-swap, and swap OpType_Store, // Any memory-write operation, including atomics without return OpType_LoadAtomic, // Atomics with return, compare-and-swap and swap OpType_Branch, // Software write to the PC OpType_Other // Any other class of operation };

Library pseudocode for aarch64/debug/statisticalprofiling/ProfilingBufferEnabled

// ProfilingBufferEnabled() // ======================== boolean ProfilingBufferEnabled() if !HaveStatisticalProfiling() then return FALSE; (secure, el) = ProfilingBufferOwner(); non_secure_bit = if secure then '0' else '1'; return (!ELUsingAArch32(el) && non_secure_bit == SCR_EL3.NS && PMBLIMITR_EL1.E == '1' && PMBSR_EL1.S == '0');

Library pseudocode for aarch64/debug/statisticalprofiling/ProfilingBufferOwner

// ProfilingBufferOwner() // ====================== (boolean, bits(2)) ProfilingBufferOwner() secure = if HaveEL(EL3) then (MDCR_EL3.NSPB<1> == '0') else IsSecure(); el = if !secure && HaveEL(EL2) && MDCR_EL2.E2PB == '00' then EL2 else EL1; return (secure, el);

Library pseudocode for aarch64/debug/statisticalprofiling/ProfilingSynchronizationBarrier

// Barrier to ensure that all existing profiling data has been formatted, and profiling buffer // addresses have been translated such that writes to the profiling buffer have been initiated. // A following DSB completes when writes to the profiling buffer have completed. ProfilingSynchronizationBarrier();

Library pseudocode for aarch64/debug/statisticalprofiling/StatisticalProfilingEnabled

// StatisticalProfilingEnabled() // ============================= boolean StatisticalProfilingEnabled() if !HaveStatisticalProfiling() || UsingAArch32() || !ProfilingBufferEnabled() then return FALSE; in_host = EL2Enabled() && HCR_EL2.TGE == '1'; (secure, el) = ProfilingBufferOwner(); if UInt(el) < UInt(PSTATE.EL) || secure != IsSecure() || (in_host && el == EL1) then return FALSE; case PSTATE.EL of when EL3 Unreachable(); when EL2 spe_bit = PMSCR_EL2.E2SPE; when EL1 spe_bit = PMSCR_EL1.E1SPE; when EL0 spe_bit = (if in_host then PMSCR_EL2.E0HSPE else PMSCR_EL1.E0SPE); return spe_bit == '1';
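The owning-EL test above can be read as: profiling is invisible from any context more privileged than, or in a different Security state from, the buffer owner. A toy model of just that comparison (illustrative, with Exception levels as integers):

def spe_visible(owner_el, owner_secure, cur_el, cur_secure, in_host):
    if owner_el < cur_el or owner_secure != cur_secure:
        return False
    if in_host and owner_el == 1:   # host mode: EL2 owns the EL1&0 regime
        return False
    return True

assert spe_visible(owner_el=2, owner_secure=False, cur_el=1, cur_secure=False, in_host=False)
assert not spe_visible(owner_el=1, owner_secure=False, cur_el=2, cur_secure=False, in_host=False)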

Library pseudocode for aarch64/debug/statisticalprofiling/SysRegAccess

enumeration SysRegAccess { SysRegAccess_OK, SysRegAccess_UNDEFINED, SysRegAccess_TrapToEL1, SysRegAccess_TrapToEL2, SysRegAccess_TrapToEL3 };

Library pseudocode for aarch64/debug/statisticalprofiling/TimeStamp

enumeration TimeStamp { TimeStamp_None, TimeStamp_Virtual, TimeStamp_Physical };

Library pseudocode for aarch64/debug/takeexceptiondbg/AArch64.TakeExceptionInDebugState

// AArch64.TakeExceptionInDebugState() // =================================== // Take an exception in Debug state to an Exception Level using AArch64. AArch64.TakeExceptionInDebugState(bits(2) target_el, ExceptionRecord exception) assert HaveEL(target_el) && !ELUsingAArch32(target_el) && UInt(target_el) >= UInt(PSTATE.EL); sync_errors = HaveIESB() && SCTLR[].IESB == '1'; if HaveDoubleFaultExt() then sync_errors = sync_errors || (SCR_EL3.EA == '1' && SCR_EL3.NMEA == '1' && PSTATE.EL == EL3); // SCTLR[].IESB might be ignored in Debug state. if !ConstrainUnpredictableBool(Unpredictable_IESBinDebug) then sync_errors = FALSE; if sync_errors && InsertIESBBeforeException(target_el) then SynchronizeErrors(); SynchronizeContext(); // If coming from AArch32 state, the top parts of the X[] registers might be set to zero from_32 = UsingAArch32(); if from_32 then AArch64.MaybeZeroRegisterUppers(); MaybeZeroSVEUppers(target_el); AArch64.ReportException(exception, target_el); PSTATE.EL = target_el; PSTATE.nRW = '0'; PSTATE.SP = '1'; SPSR[] = bits(32) UNKNOWN; ELR[] = bits(64) UNKNOWN; if HaveSSBSExt() then PSTATE.SSBS = bits(1) UNKNOWN; // PSTATE.{SS,D,A,I,F} are not observable and ignored in Debug state, so behave as if UNKNOWN. PSTATE.<SS,D,A,I,F> = bits(5) UNKNOWN; DLR_EL0 = bits(64) UNKNOWN; DSPSR_EL0 = bits(32) UNKNOWN; if HaveBTIExt() then if exception.type == Exception_Breakpoint then DSPSR_EL0<11:10> = PSTATE.BTYPE; else DSPSR_EL0<11:10> = if ConstrainUnpredictableBool(Unpredictable_ZEROBTYPE) then '00' else PSTATE.BTYPE; PSTATE.BTYPE = '00'; PSTATE.IL = '0'; if from_32 then // Coming from AArch32 PSTATE.IT = '00000000'; PSTATE.T = '0'; // PSTATE.J is RES0 if HavePANExt() && (PSTATE.EL == EL1 || (PSTATE.EL == EL2 && ELIsInHost(EL0))) && SCTLR[].SPAN == '0' then PSTATE.PAN = '1'; if HaveMTEExt() then PSTATE.TCO = '1'; EDSCR.ERR = '1'; UpdateEDSCRFields(); // Update EDSCR processor state flags. if sync_errors then SynchronizeErrors(); EndOfInstruction();

Library pseudocode for aarch64/debug/watchpoint/AArch64.WatchpointByteMatch

// AArch64.WatchpointByteMatch() // ============================= boolean AArch64.WatchpointByteMatch(integer n, AccType acctype, bits(64) vaddress) el = if HaveNV2Ext() && acctype == AccType_NV2REGISTER then EL2 else PSTATE.EL; top = AddrTop(vaddress, FALSE, el); bottom = if DBGWVR_EL1[n]<2> == '1' then 2 else 3; // Word or doubleword byte_select_match = (DBGWCR_EL1[n].BAS<UInt(vaddress<bottom-1:0>)> != '0'); mask = UInt(DBGWCR_EL1[n].MASK); // If DBGWCR_EL1[n].MASK is a non-zero value and DBGWCR_EL1[n].BAS is not set to '11111111', or // DBGWCR_EL1[n].BAS specifies a non-contiguous set of bytes, the behavior is CONSTRAINED // UNPREDICTABLE. if mask > 0 && !IsOnes(DBGWCR_EL1[n].BAS) then byte_select_match = ConstrainUnpredictableBool(Unpredictable_WPMASKANDBAS); else LSB = (DBGWCR_EL1[n].BAS AND NOT(DBGWCR_EL1[n].BAS - 1)); MSB = (DBGWCR_EL1[n].BAS + LSB); if !IsZero(MSB AND (MSB - 1)) then // Not contiguous byte_select_match = ConstrainUnpredictableBool(Unpredictable_WPBASCONTIGUOUS); bottom = 3; // For the whole doubleword // If the address mask is set to a reserved value, the behavior is CONSTRAINED UNPREDICTABLE. if mask > 0 && mask <= 2 then (c, mask) = ConstrainUnpredictableInteger(3, 31, Unpredictable_RESWPMASK); assert c IN {Constraint_DISABLED, Constraint_NONE, Constraint_UNKNOWN}; case c of when Constraint_DISABLED return FALSE; // Disabled when Constraint_NONE mask = 0; // No masking // Otherwise the value returned by ConstrainUnpredictableInteger is a not-reserved value if mask > bottom then WVR_match = (vaddress<top:mask> == DBGWVR_EL1[n]<top:mask>); // If masked bits of DBGWVR_EL1[n] are not zero, the behavior is CONSTRAINED UNPREDICTABLE. if WVR_match && !IsZero(DBGWVR_EL1[n]<mask-1:bottom>) then WVR_match = ConstrainUnpredictableBool(Unpredictable_WPMASKEDBITS); else WVR_match = vaddress<top:bottom> == DBGWVR_EL1[n]<top:bottom>; return WVR_match && byte_select_match;
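The LSB/MSB manipulation above is the standard bit trick for checking that the byte-address select mask is a single contiguous run of ones. In Python, for an 8-bit BAS field:

def bas_is_contiguous(bas):
    if bas == 0:
        return True               # degenerate: no bytes selected
    lsb = bas & -bas              # lowest set bit (two's-complement trick)
    msb = (bas + lsb) & 0xFF      # wraps to 0 when BAS is all-ones
    return msb & (msb - 1) == 0   # power of two (or zero) => contiguous

assert bas_is_contiguous(0b00111100)      # bytes 2..5
assert not bas_is_contiguous(0b01010000)  # non-contiguous selection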

Library pseudocode for aarch64/debug/watchpoint/AArch64.WatchpointMatch

// AArch64.WatchpointMatch() // ========================= // Watchpoint matching in an AArch64 translation regime. boolean AArch64.WatchpointMatch(integer n, bits(64) vaddress, integer size, boolean ispriv, AccType acctype, boolean iswrite) assert !ELUsingAArch32(S1TranslationRegime()); assert n <= UInt(ID_AA64DFR0_EL1.WRPs); // "ispriv" is FALSE for LDTR/STTR instructions executed at EL1 and all // load/stores at EL0, TRUE for all other load/stores. "iswrite" is TRUE for stores, FALSE for // loads. enabled = DBGWCR_EL1[n].E == '1'; linked = DBGWCR_EL1[n].WT == '1'; isbreakpnt = FALSE; state_match = AArch64.StateMatch(DBGWCR_EL1[n].SSC, DBGWCR_EL1[n].HMC, DBGWCR_EL1[n].PAC, linked, DBGWCR_EL1[n].LBN, isbreakpnt, acctype, ispriv); ls_match = (DBGWCR_EL1[n].LSC<(if iswrite then 1 else 0)> == '1'); value_match = FALSE; for byte = 0 to size - 1 value_match = value_match || AArch64.WatchpointByteMatch(n, acctype, vaddress + byte); return value_match && state_match && ls_match && enabled;

Library pseudocode for aarch64/exceptions/aborts/AArch64.Abort

// AArch64.Abort() // =============== // Abort and Debug exception handling in an AArch64 translation regime. AArch64.Abort(bits(64) vaddress, FaultRecord fault) if IsDebugException(fault) then if fault.acctype == AccType_IFETCH then if UsingAArch32() && fault.debugmoe == DebugException_VectorCatch then AArch64.VectorCatchException(fault); else AArch64.BreakpointException(fault); else AArch64.WatchpointException(vaddress, fault); elsif fault.acctype == AccType_IFETCH then AArch64.InstructionAbort(vaddress, fault); else AArch64.DataAbort(vaddress, fault);

Library pseudocode for aarch64/exceptions/aborts/AArch64.AbortSyndrome

// AArch64.AbortSyndrome() // ======================= // Creates an exception syndrome record for Abort and Watchpoint exceptions // from an AArch64 translation regime. ExceptionRecord AArch64.AbortSyndrome(Exception type, FaultRecord fault, bits(64) vaddress) exception = ExceptionSyndrome(type); d_side = type IN {Exception_DataAbort, Exception_NV2DataAbort, Exception_Watchpoint}; exception.syndrome = AArch64.FaultSyndrome(d_side, fault); exception.vaddress = ZeroExtend(vaddress); if IPAValid(fault) then exception.ipavalid = TRUE; exception.NS = fault.ipaddress.NS; exception.ipaddress = fault.ipaddress.address; else exception.ipavalid = FALSE; return exception;

Library pseudocode for aarch64/exceptions/aborts/AArch64.CheckPCAlignment

// AArch64.CheckPCAlignment() // ========================== AArch64.CheckPCAlignment() bits(64) pc = ThisInstrAddr(); if pc<1:0> != '00' then AArch64.PCAlignmentFault();

Library pseudocode for aarch64/exceptions/aborts/AArch64.DataAbort

// AArch64.DataAbort() // =================== AArch64.DataAbort(bits(64) vaddress, FaultRecord fault) route_to_el3 = HaveEL(EL3) && SCR_EL3.EA == '1' && IsExternalAbort(fault); route_to_el2 = (EL2Enabled() && PSTATE.EL IN {EL0,EL1} && (HCR_EL2.TGE == '1' || (HaveRASExt() && HCR_EL2.TEA == '1' && IsExternalAbort(fault)) || (HaveNV2Ext() && fault.acctype == AccType_NV2REGISTER) || IsSecondStage(fault))); bits(64) preferred_exception_return = ThisInstrAddr(); if (HaveDoubleFaultExt() && (PSTATE.EL == EL3 || route_to_el3) && IsExternalAbort(fault) && SCR_EL3.EASE == '1') then vect_offset = 0x180; else vect_offset = 0x0; if HaveNV2Ext() && fault.acctype == AccType_NV2REGISTER then exception = AArch64.AbortSyndrome(Exception_NV2DataAbort, fault, vaddress); else exception = AArch64.AbortSyndrome(Exception_DataAbort, fault, vaddress); if PSTATE.EL == EL3 || route_to_el3 then AArch64.TakeException(EL3, exception, preferred_exception_return, vect_offset); elsif PSTATE.EL == EL2 || route_to_el2 then AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset); else AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/aborts/AArch64.InstructionAbort

// AArch64.InstructionAbort() // ========================== AArch64.InstructionAbort(bits(64) vaddress, FaultRecord fault) // External aborts on instruction fetch must be taken synchronously if HaveDoubleFaultExt() then assert fault.type != Fault_AsyncExternal; route_to_el3 = HaveEL(EL3) && SCR_EL3.EA == '1' && IsExternalAbort(fault); route_to_el2 = (EL2Enabled() && PSTATE.EL IN {EL0,EL1} && (HCR_EL2.TGE == '1' || IsSecondStage(fault) || (HaveRASExt() && HCR_EL2.TEA == '1' && IsExternalAbort(fault)))); bits(64) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x0; exception = AArch64.AbortSyndrome(Exception_InstructionAbort, fault, vaddress); if PSTATE.EL == EL3 || route_to_el3 then AArch64.TakeException(EL3, exception, preferred_exception_return, vect_offset); elsif PSTATE.EL == EL2 || route_to_el2 then AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset); else AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/aborts/AArch64.PCAlignmentFault

// AArch64.PCAlignmentFault() // ========================== // Called on unaligned program counter in AArch64 state. AArch64.PCAlignmentFault() bits(64) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x0; exception = ExceptionSyndrome(Exception_PCAlignment); exception.vaddress = ThisInstrAddr(); if UInt(PSTATE.EL) > UInt(EL1) then AArch64.TakeException(PSTATE.EL, exception, preferred_exception_return, vect_offset); elsif EL2Enabled() && HCR_EL2.TGE == '1' then AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset); else AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/aborts/AArch64.SPAlignmentFault

// AArch64.SPAlignmentFault() // ========================== // Called on an unaligned stack pointer in AArch64 state. AArch64.SPAlignmentFault() bits(64) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x0; exception = ExceptionSyndrome(Exception_SPAlignment); if UInt(PSTATE.EL) > UInt(EL1) then AArch64.TakeException(PSTATE.EL, exception, preferred_exception_return, vect_offset); elsif EL2Enabled() && HCR_EL2.TGE == '1' then AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset); else AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/aborts/BranchTargetException

// BranchTargetException // ===================== // Raise branch target exception. AArch64.BranchTargetException(bits(52) vaddress) route_to_el2 = EL2Enabled() && PSTATE.EL == EL0 && HCR_EL2.TGE == '1'; bits(64) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x0; exception = ExceptionSyndrome(Exception_BranchTarget); exception.syndrome<1:0> = PSTATE.BTYPE; exception.syndrome<24:2> = Zeros(); // RES0 if UInt(PSTATE.EL) > UInt(EL1) then AArch64.TakeException(PSTATE.EL, exception, preferred_exception_return, vect_offset); elsif route_to_el2 then AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset); else AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/aborts/EffectiveTCF

// EffectiveTCF() // ============== // Returns the TCF field applied to Tag Check Fails in the given Exception Level bits(2) EffectiveTCF(bits(2) el) if el == EL3 then tcf = SCTLR_EL3.TCF; elsif el == EL2 then tcf = SCTLR_EL2.TCF; elsif el == EL1 then tcf = SCTLR_EL1.TCF; elsif el == EL0 && HCR_EL2.<E2H,TGE> == '11' then tcf = SCTLR_EL2.TCF0; elsif el == EL0 && HCR_EL2.<E2H,TGE> != '11' then tcf = SCTLR_EL1.TCF0; return tcf;

Library pseudocode for aarch64/exceptions/aborts/ReportTagCheckFail

// ReportTagCheckFail() // ==================== // Records a tag check fail in the appropriate TFSR_ELx field ReportTagCheckFail(bits(2) el, bit ttbr) if el == EL3 then assert ttbr == '0'; TFSR_EL3.TF0 = '1'; elsif el == EL2 then if ttbr == '0' then TFSR_EL2.TF0 = '1'; else TFSR_EL2.TF1 = '1'; elsif el == EL1 then if ttbr == '0' then TFSR_EL1.TF0 = '1'; else TFSR_EL1.TF1 = '1'; elsif el == EL0 then if ttbr == '0' then TFSRE0_EL1.TF0 = '1'; else TFSRE0_EL1.TF1 = '1';

Library pseudocode for aarch64/exceptions/aborts/TagCheckFail

// TagCheckFail() // ============== // Handle a tag check fail condition TagCheckFail(bits(64) vaddress, boolean iswrite) bits(2) tcf = EffectiveTCF(PSTATE.EL); if tcf == '01' then TagCheckFault(vaddress, iswrite); elsif tcf == '10' then ReportTagCheckFail(PSTATE.EL, vaddress<55>);
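A minimal sketch of this dispatch, assuming the two-bit TCF encodings used above ('01' synchronous fault, '10' asynchronous accumulation) and with the fault/report actions passed in as callables:

def on_tag_check_fail(tcf, vaddress, fault, report):
    if tcf == 0b01:
        fault(vaddress)                # synchronous Data Abort
    elif tcf == 0b10:
        ttbr = (vaddress >> 55) & 1    # VA bit 55 selects the TTBR/TFSR half
        report(ttbr)                   # asynchronous: set TFSR_ELx.TFn
    # tcf == 0b00: tag check fails have no effect; '11' is reserved

events = []
on_tag_check_fail(0b10, 1 << 55, fault=events.append, report=events.append)
assert events == [1]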

Library pseudocode for aarch64/exceptions/aborts/TagCheckFault

// TagCheckFault() // =============== // Raise a tag check fail exception. TagCheckFault(bits(64) va, boolean write) bits(2) target_el; bits(64) preferred_exception_return = ThisInstrAddr(); integer vect_offset = 0x0; if PSTATE.EL == EL0 then target_el = if HCR_EL2.TGE == '0' then EL1 else EL2; else target_el = PSTATE.EL; exception = ExceptionSyndrome(Exception_DataAbort); exception.syndrome<5:0> = '010001'; if write then exception.syndrome<6> = '1'; exception.vaddress = va; AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/asynch/AArch64.TakePhysicalFIQException

// AArch64.TakePhysicalFIQException() // ================================== AArch64.TakePhysicalFIQException() route_to_el3 = HaveEL(EL3) && SCR_EL3.FIQ == '1'; route_to_el2 = (EL2Enabled() && PSTATE.EL IN {EL0,EL1} && (HCR_EL2.TGE == '1' || HCR_EL2.FMO == '1')); bits(64) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x100; exception = ExceptionSyndrome(Exception_FIQ); if route_to_el3 then AArch64.TakeException(EL3, exception, preferred_exception_return, vect_offset); elsif PSTATE.EL == EL2 || route_to_el2 then assert PSTATE.EL != EL3; AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset); else assert PSTATE.EL IN {EL0,EL1}; AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/asynch/AArch64.TakePhysicalIRQException

// AArch64.TakePhysicalIRQException() // ================================== // Take an enabled physical IRQ exception. AArch64.TakePhysicalIRQException() route_to_el3 = HaveEL(EL3) && SCR_EL3.IRQ == '1'; route_to_el2 = (EL2Enabled() && PSTATE.EL IN {EL0,EL1} && (HCR_EL2.TGE == '1' || HCR_EL2.IMO == '1')); bits(64) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x80; exception = ExceptionSyndrome(Exception_IRQ); if route_to_el3 then AArch64.TakeException(EL3, exception, preferred_exception_return, vect_offset); elsif PSTATE.EL == EL2 || route_to_el2 then assert PSTATE.EL != EL3; AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset); else assert PSTATE.EL IN {EL0,EL1}; AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/asynch/AArch64.TakePhysicalSErrorException

// AArch64.TakePhysicalSErrorException() // ===================================== AArch64.TakePhysicalSErrorException(boolean impdef_syndrome, bits(24) syndrome) route_to_el3 = HaveEL(EL3) && SCR_EL3.EA == '1'; route_to_el2 = (EL2Enabled() && PSTATE.EL IN {EL0,EL1} && (HCR_EL2.TGE == '1' || (!IsInHost() && HCR_EL2.AMO == '1'))); bits(64) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x180; exception = ExceptionSyndrome(Exception_SError); exception.syndrome<24> = if impdef_syndrome then '1' else '0'; exception.syndrome<23:0> = syndrome; ClearPendingPhysicalSError(); if PSTATE.EL == EL3 || route_to_el3 then AArch64.TakeException(EL3, exception, preferred_exception_return, vect_offset); elsif PSTATE.EL == EL2 || route_to_el2 then AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset); else AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/asynch/AArch64.TakeVirtualFIQException

// AArch64.TakeVirtualFIQException() // ================================= AArch64.TakeVirtualFIQException() assert EL2Enabled() && PSTATE.EL IN {EL0,EL1}; assert HCR_EL2.TGE == '0' && HCR_EL2.FMO == '1'; // Virtual IRQ enabled if TGE==0 and FMO==1 bits(64) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x100; exception = ExceptionSyndrome(Exception_FIQ); AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/asynch/AArch64.TakeVirtualIRQException

// AArch64.TakeVirtualIRQException() // ================================= AArch64.TakeVirtualIRQException() assert EL2Enabled() && PSTATE.EL IN {EL0,EL1}; assert HCR_EL2.TGE == '0' && HCR_EL2.IMO == '1'; // Virtual IRQ enabled if TGE==0 and IMO==1 bits(64) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x80; exception = ExceptionSyndrome(Exception_IRQ); AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/asynch/AArch64.TakeVirtualSErrorException

// AArch64.TakeVirtualSErrorException() // ==================================== AArch64.TakeVirtualSErrorException(boolean impdef_syndrome, bits(24) syndrome) assert EL2Enabled() && PSTATE.EL IN {EL0,EL1}; assert HCR_EL2.TGE == '0' && HCR_EL2.AMO == '1'; // Virtual SError enabled if TGE==0 and AMO==1 bits(64) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x180; exception = ExceptionSyndrome(Exception_SError); if HaveRASExt() then exception.syndrome<24> = VSESR_EL2.IDS; exception.syndrome<23:0> = VSESR_EL2.ISS; else exception.syndrome<24> = if impdef_syndrome then '1' else '0'; if impdef_syndrome then exception.syndrome<23:0> = syndrome; ClearPendingVirtualSError(); AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/debug/AArch64.BreakpointException

// AArch64.BreakpointException() // ============================= AArch64.BreakpointException(FaultRecord fault) assert PSTATE.EL != EL3; route_to_el2 = (EL2Enabled() && PSTATE.EL IN {EL0,EL1} && (HCR_EL2.TGE == '1' || MDCR_EL2.TDE == '1')); bits(64) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x0; vaddress = bits(64) UNKNOWN; exception = AArch64.AbortSyndrome(Exception_Breakpoint, fault, vaddress); if PSTATE.EL == EL2 || route_to_el2 then AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset); else AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/debug/AArch64.SoftwareBreakpoint

// AArch64.SoftwareBreakpoint() // ============================ AArch64.SoftwareBreakpoint(bits(16) immediate) route_to_el2 = (EL2Enabled() && PSTATE.EL IN {EL0,EL1} && (HCR_EL2.TGE == '1' || MDCR_EL2.TDE == '1')); bits(64) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x0; exception = ExceptionSyndrome(Exception_SoftwareBreakpoint); exception.syndrome<15:0> = immediate; if UInt(PSTATE.EL) > UInt(EL1) then AArch64.TakeException(PSTATE.EL, exception, preferred_exception_return, vect_offset); elsif route_to_el2 then AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset); else AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/debug/AArch64.SoftwareStepException

// AArch64.SoftwareStepException() // =============================== AArch64.SoftwareStepException() assert PSTATE.EL != EL3; route_to_el2 = (EL2Enabled() && PSTATE.EL IN {EL0,EL1} && (HCR_EL2.TGE == '1' || MDCR_EL2.TDE == '1')); bits(64) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x0; exception = ExceptionSyndrome(Exception_SoftwareStep); if SoftwareStep_DidNotStep() then exception.syndrome<24> = '0'; else exception.syndrome<24> = '1'; exception.syndrome<6> = if SoftwareStep_SteppedEX() then '1' else '0'; if PSTATE.EL == EL2 || route_to_el2 then AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset); else AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/debug/AArch64.VectorCatchException

// AArch64.VectorCatchException() // ============================== // Vector Catch taken from EL0 or EL1 to EL2. This can only be called when debug exceptions are // being routed to EL2, as Vector Catch is a legacy debug event. AArch64.VectorCatchException(FaultRecord fault) assert PSTATE.EL != EL2; assert EL2Enabled() && (HCR_EL2.TGE == '1' || MDCR_EL2.TDE == '1'); bits(64) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x0; vaddress = bits(64) UNKNOWN; exception = AArch64.AbortSyndrome(Exception_VectorCatch, fault, vaddress); AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/debug/AArch64.WatchpointException

// AArch64.WatchpointException() // ============================= AArch64.WatchpointException(bits(64) vaddress, FaultRecord fault) assert PSTATE.EL != EL3; route_to_el2 = (EL2Enabled() && PSTATE.EL IN {EL0,EL1} && (HCR_EL2.TGE == '1' || MDCR_EL2.TDE == '1')); bits(64) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x0; exception = AArch64.AbortSyndrome(Exception_Watchpoint, fault, vaddress); if PSTATE.EL == EL2 || route_to_el2 then AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset); else AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/exceptions/AArch64.ExceptionClass

// AArch64.ExceptionClass() // ======================== // Return the Exception Class and Instruction Length fields to be reported in the ESR (integer,bit) AArch64.ExceptionClass(Exception type, bits(2) target_el) il = if ThisInstrLength() == 32 then '1' else '0'; from_32 = UsingAArch32(); assert from_32 || il == '1'; // AArch64 instructions are always 32-bit case type of when Exception_Uncategorized ec = 0x00; il = '1'; when Exception_WFxTrap ec = 0x01; when Exception_CP15RTTrap ec = 0x03; assert from_32; when Exception_CP15RRTTrap ec = 0x04; assert from_32; when Exception_CP14RTTrap ec = 0x05; assert from_32; when Exception_CP14DTTrap ec = 0x06; assert from_32; when Exception_AdvSIMDFPAccessTrap ec = 0x07; when Exception_FPIDTrap ec = 0x08; when Exception_PACTrap ec = 0x09; when Exception_CP14RRTTrap ec = 0x0C; assert from_32; when Exception_BranchTarget ec = 0x0D; when Exception_IllegalState ec = 0x0E; il = '1'; when Exception_SupervisorCall ec = 0x11; when Exception_HypervisorCall ec = 0x12; when Exception_MonitorCall ec = 0x13; when Exception_SystemRegisterTrap ec = 0x18; assert !from_32; when Exception_SVEAccessTrap ec = 0x19; assert !from_32; when Exception_ERetTrap ec = 0x1A; when Exception_InstructionAbort ec = 0x20; il = '1'; when Exception_PCAlignment ec = 0x22; il = '1'; when Exception_DataAbort ec = 0x24; when Exception_NV2DataAbort ec = 0x25; when Exception_SPAlignment ec = 0x26; il = '1'; assert !from_32; when Exception_FPTrappedException ec = 0x28; when Exception_SError ec = 0x2F; il = '1'; when Exception_Breakpoint ec = 0x30; il = '1'; when Exception_SoftwareStep ec = 0x32; il = '1'; when Exception_Watchpoint ec = 0x34; il = '1'; when Exception_SoftwareBreakpoint ec = 0x38; when Exception_VectorCatch ec = 0x3A; il = '1'; assert from_32; otherwise Unreachable(); if ec IN {0x20,0x24,0x30,0x32,0x34} && target_el == PSTATE.EL then ec = ec + 1; if ec IN {0x11,0x12,0x13,0x28,0x38} && !from_32 then ec = ec + 4; return (ec,il);
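The two adjustments at the end encode a regularity in the EC space: the Instruction Abort, Data Abort, Breakpoint, Software Step, and Watchpoint ECs each have a "taken from the same EL" variant at ec + 1, and the trap/call ECs listed have an AArch64 variant at ec + 4. Illustrative check:

SAME_EL_PLUS_1 = {0x20, 0x24, 0x30, 0x32, 0x34}   # ec + 1 when taken from the same EL
AARCH64_PLUS_4 = {0x11, 0x12, 0x13, 0x28, 0x38}   # ec + 4 when taken from AArch64

def adjust_ec(ec, same_el, from_32):
    if ec in SAME_EL_PLUS_1 and same_el:
        ec += 1
    if ec in AARCH64_PLUS_4 and not from_32:
        ec += 4
    return ec

assert adjust_ec(0x24, same_el=True, from_32=False) == 0x25   # Data Abort, same EL
assert adjust_ec(0x11, same_el=False, from_32=False) == 0x15  # SVC from AArch64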

Library pseudocode for aarch64/exceptions/exceptions/AArch64.ReportException

// AArch64.ReportException() // ========================= // Report syndrome information for exception taken to AArch64 state. AArch64.ReportException(ExceptionRecord exception, bits(2) target_el) Exception type = exception.type; (ec,il) = AArch64.ExceptionClass(type, target_el); iss = exception.syndrome; // IL is not valid for Data Abort exceptions without valid instruction syndrome information if ec IN {0x24,0x25} && iss<24> == '0' then il = '1'; ESR[target_el] = ec<5:0>:il:iss; if type IN {Exception_InstructionAbort, Exception_PCAlignment, Exception_DataAbort, Exception_NV2DataAbort, Exception_Watchpoint} then FAR[target_el] = exception.vaddress; else FAR[target_el] = bits(64) UNKNOWN; if target_el == EL2 then if exception.ipavalid then HPFAR_EL2<43:4> = exception.ipaddress<51:12>; if HaveSecureEL2Ext() then if IsSecureEL2Enabled() then HPFAR_EL2.NS = exception.NS; else HPFAR_EL2.NS = '0'; else HPFAR_EL2<43:4> = bits(40) UNKNOWN; return;
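The ESR write concatenates ec<5:0>:il:iss; as plain bit arithmetic (illustrative):

def pack_esr(ec, il, iss):
    # ESR_ELx layout: EC in [31:26], IL in [25], ISS in [24:0].
    assert 0 <= ec < 0x40 and il in (0, 1) and 0 <= iss < (1 << 25)
    return (ec << 26) | (il << 25) | iss

# A same-EL Data Abort (EC 0x25), 32-bit instruction, ISS 0x4F.
assert pack_esr(0x25, 1, 0x4F) == 0x9600_004F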

Library pseudocode for aarch64/exceptions/exceptions/AArch64.ResetControlRegisters

// Resets System registers and memory-mapped control registers that have architecturally-defined // reset values to those values. AArch64.ResetControlRegisters(boolean cold_reset);

Library pseudocode for aarch64/exceptions/exceptions/AArch64.TakeReset

// AArch64.TakeReset() // =================== // Reset into AArch64 state AArch64.TakeReset(boolean cold_reset) assert !HighestELUsingAArch32(); // Enter the highest implemented Exception level in AArch64 state PSTATE.nRW = '0'; if HaveEL(EL3) then PSTATE.EL = EL3; elsif HaveEL(EL2) then PSTATE.EL = EL2; else PSTATE.EL = EL1; // Reset the system registers and other system components AArch64.ResetControlRegisters(cold_reset); // Reset all other PSTATE fields PSTATE.SP = '1'; // Select stack pointer PSTATE.<D,A,I,F> = '1111'; // All asynchronous exceptions masked PSTATE.SS = '0'; // Clear software step bit PSTATE.DIT = '0'; // PSTATE.DIT is reset to 0 when resetting into AArch64 PSTATE.IL = '0'; // Clear Illegal Execution state bit // All registers, bits and fields not reset by the above pseudocode or by the BranchTo() call // below are UNKNOWN bitstrings after reset. In particular, the return information registers // ELR_ELx and SPSR_ELx have UNKNOWN values, so that it // is impossible to return from a reset in an architecturally defined way. AArch64.ResetGeneralRegisters(); AArch64.ResetSIMDFPRegisters(); AArch64.ResetSpecialRegisters(); ResetExternalDebugRegisters(cold_reset); bits(64) rv; // IMPLEMENTATION DEFINED reset vector if HaveEL(EL3) then rv = RVBAR_EL3; elsif HaveEL(EL2) then rv = RVBAR_EL2; else rv = RVBAR_EL1; // The reset vector must be correctly aligned assert IsZero(rv<63:PAMax()>) && IsZero(rv<1:0>); BranchTo(rv, BranchType_RESET);

Library pseudocode for aarch64/exceptions/ieeefp/AArch64.FPTrappedException

// AArch64.FPTrappedException() // ============================ AArch64.FPTrappedException(boolean is_ase, integer element, bits(8) accumulated_exceptions) exception = ExceptionSyndrome(Exception_FPTrappedException); if is_ase then if boolean IMPLEMENTATION_DEFINED "vector instructions set TFV to 1" then exception.syndrome<23> = '1'; // TFV else exception.syndrome<23> = '0'; // TFV else exception.syndrome<23> = '1'; // TFV exception.syndrome<10:8> = bits(3) UNKNOWN; // VECITR if exception.syndrome<23> == '1' then exception.syndrome<7,4:0> = accumulated_exceptions<7,4:0>; // IDF,IXF,UFF,OFF,DZF,IOF else exception.syndrome<7,4:0> = bits(6) UNKNOWN; route_to_el2 = EL2Enabled() && HCR_EL2.TGE == '1'; bits(64) preferred_exception_return = ThisInstrAddr(); vect_offset = 0x0; if UInt(PSTATE.EL) > UInt(EL1) then AArch64.TakeException(PSTATE.EL, exception, preferred_exception_return, vect_offset); elsif route_to_el2 then AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset); else AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/syscalls/AArch64.CallHypervisor

// AArch64.CallHypervisor() // ======================== // Performs a HVC call AArch64.CallHypervisor(bits(16) immediate) assert HaveEL(EL2); if UsingAArch32() then AArch32.ITAdvance(); SSAdvance(); bits(64) preferred_exception_return = NextInstrAddr(); vect_offset = 0x0; exception = ExceptionSyndrome(Exception_HypervisorCall); exception.syndrome<15:0> = immediate; if PSTATE.EL == EL3 then AArch64.TakeException(EL3, exception, preferred_exception_return, vect_offset); else AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/syscalls/AArch64.CallSecureMonitor

// AArch64.CallSecureMonitor() // =========================== AArch64.CallSecureMonitor(bits(16) immediate) assert HaveEL(EL3) && !ELUsingAArch32(EL3); if UsingAArch32() then AArch32.ITAdvance(); SSAdvance(); bits(64) preferred_exception_return = NextInstrAddr(); vect_offset = 0x0; exception = ExceptionSyndrome(Exception_MonitorCall); exception.syndrome<15:0> = immediate; AArch64.TakeException(EL3, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/syscalls/AArch64.CallSupervisor

// AArch64.CallSupervisor() // ======================== // Calls the Supervisor AArch64.CallSupervisor(bits(16) immediate) if UsingAArch32() then AArch32.ITAdvance(); SSAdvance(); route_to_el2 = EL2Enabled() && PSTATE.EL == EL0 && HCR_EL2.TGE == '1'; bits(64) preferred_exception_return = NextInstrAddr(); vect_offset = 0x0; exception = ExceptionSyndrome(Exception_SupervisorCall); exception.syndrome<15:0> = immediate; if UInt(PSTATE.EL) > UInt(EL1) then AArch64.TakeException(PSTATE.EL, exception, preferred_exception_return, vect_offset); elsif route_to_el2 then AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset); else AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/takeexception/AArch64.TakeException

// AArch64.TakeException() // ======================= // Take an exception to an Exception Level using AArch64. AArch64.TakeException(bits(2) target_el, ExceptionRecord exception, bits(64) preferred_exception_return, integer vect_offset) assert HaveEL(target_el) && !ELUsingAArch32(target_el) && UInt(target_el) >= UInt(PSTATE.EL); sync_errors = HaveIESB() && SCTLR[].IESB == '1'; if HaveDoubleFaultExt() then sync_errors = sync_errors || (SCR_EL3.EA == '1' && SCR_EL3.NMEA == '1' && PSTATE.EL == EL3); if sync_errors && InsertIESBBeforeException(target_el) then SynchronizeErrors(); iesb_req = FALSE; sync_errors = FALSE; TakeUnmaskedPhysicalSErrorInterrupts(iesb_req); SynchronizeContext(); // If coming from AArch32 state, the top parts of the X[] registers might be set to zero from_32 = UsingAArch32(); if from_32 then AArch64.MaybeZeroRegisterUppers(); MaybeZeroSVEUppers(target_el); if UInt(target_el) > UInt(PSTATE.EL) then boolean lower_32; if target_el == EL3 then if EL2Enabled() then lower_32 = ELUsingAArch32(EL2); else lower_32 = ELUsingAArch32(EL1); elsif IsInHost() && PSTATE.EL == EL0 && target_el == EL2 then lower_32 = ELUsingAArch32(EL0); else lower_32 = ELUsingAArch32(target_el - 1); vect_offset = vect_offset + (if lower_32 then 0x600 else 0x400); elsif PSTATE.SP == '1' then vect_offset = vect_offset + 0x200; spsr = GetPSRFromPSTATE(); if PSTATE.EL == EL1 && target_el == EL1 && HaveNVExt() && EL2Enabled() && HCR_EL2.<NV, NV1> == '10' then spsr<3:2> = '10'; if HaveUAOExt() then PSTATE.UAO = '0'; if !(exception.type IN {Exception_IRQ, Exception_FIQ}) then AArch64.ReportException(exception, target_el); PSTATE.EL = target_el; PSTATE.nRW = '0'; PSTATE.SP = '1'; if HaveBTIExt() then if ( exception.type IN {Exception_SError, Exception_IRQ, Exception_FIQ, Exception_SoftwareStep, Exception_PCAlignment, Exception_InstructionAbort, Exception_SoftwareBreakpoint, Exception_IllegalState, Exception_BranchTarget} ) then spsr_btype = PSTATE.BTYPE; else spsr_btype = if ConstrainUnpredictableBool(Unpredictable_ZEROBTYPE) then '00' else PSTATE.BTYPE; spsr<11:10> = spsr_btype; SPSR[] = spsr; ELR[] = preferred_exception_return; if HaveSSBSExt() then PSTATE.SSBS = SCTLR[].DSSBS; if HaveBTIExt() then PSTATE.BTYPE = '00'; PSTATE.SS = '0'; PSTATE.<D,A,I,F> = '1111'; PSTATE.IL = '0'; if from_32 then // Coming from AArch32 PSTATE.IT = '00000000'; PSTATE.T = '0'; // PSTATE.J is RES0 if HavePANExt() && (PSTATE.EL == EL1 || (PSTATE.EL == EL2 && ELIsInHost(EL0))) && SCTLR[].SPAN == '0' then PSTATE.PAN = '1'; if HaveMTEExt() then PSTATE.TCO = '1'; BranchTo(VBAR[]<63:11>:vect_offset<10:0>, BranchType_EXCEPTION); if sync_errors then SynchronizeErrors(); iesb_req = TRUE; TakeUnmaskedPhysicalSErrorInterrupts(iesb_req); EndOfInstruction();
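The vector selection in this function composes the handler address from VBAR_ELx<63:11> and one of four 0x200-byte slots per exception type; a hedged model of just that composition (illustrative parameter names):

def exception_vector(vbar, base_offset, from_lower_el, lower_32, using_sp_elx):
    # Slots: +0x000 current EL with SP_EL0, +0x200 current EL with SP_ELx,
    #        +0x400 lower EL using AArch64, +0x600 lower EL using AArch32.
    if from_lower_el:
        base_offset += 0x600 if lower_32 else 0x400
    elif using_sp_elx:
        base_offset += 0x200
    return (vbar & ~0x7FF) | (base_offset & 0x7FF)

# An IRQ (base 0x80) from a lower AArch64 EL lands at VBAR + 0x480.
assert exception_vector(0x1_0000, 0x80, True, False, True) == 0x1_0480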

Library pseudocode for aarch64/exceptions/traps/AArch64.AArch32SystemAccessTrap

// AArch64.AArch32SystemAccessTrap()
// =================================
// Trapped AArch32 System register access other than due to CPTR_EL2 or CPACR_EL1.

AArch64.AArch32SystemAccessTrap(bits(2) target_el, bits(32) aarch32_instr)
    assert HaveEL(target_el) && target_el != EL0 && UInt(target_el) >= UInt(PSTATE.EL);

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = AArch64.AArch32SystemAccessTrapSyndrome(aarch32_instr);

    if target_el == EL1 && EL2Enabled() && HCR_EL2.TGE == '1' then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/traps/AArch64.AArch32SystemAccessTrapSyndrome

// AArch64.AArch32SystemAccessTrapSyndrome()
// =========================================
// Return the syndrome information for traps on AArch32 MCR, MCRR, MRC, MRRC, and VMRS instructions,
// other than traps that are due to HCPTR or CPACR.

ExceptionRecord AArch64.AArch32SystemAccessTrapSyndrome(bits(32) instr)
    ExceptionRecord exception;

    cpnum = UInt(instr<11:8>);

    bits(20) iss = Zeros();
    if instr<27:24> == '1110' && instr<4> == '1' && instr<31:28> != '1111' then    // MRC/MCR
        case cpnum of
            when 10    exception = ExceptionSyndrome(Exception_FPIDTrap);
            when 14    exception = ExceptionSyndrome(Exception_CP14RTTrap);
            when 15    exception = ExceptionSyndrome(Exception_CP15RTTrap);
            otherwise  Unreachable();
        iss<19:17> = instr<7:5>;              // opc2
        iss<16:14> = instr<23:21>;            // opc1
        iss<13:10> = instr<19:16>;            // CRn
        if instr<20> == '1' && instr<15:12> == '1111' then       // MRC, Rt==15
            iss<9:5> = '11111';
        elsif instr<20> == '0' && instr<15:12> == '1111' then    // MCR, Rt==15
            iss<9:5> = bits(5) UNKNOWN;
        else
            iss<9:5> = LookUpRIndex(UInt(instr<15:12>), PSTATE.M)<4:0>;
        iss<4:1> = instr<3:0>;                // CRm
    elsif instr<27:21> == '1100010' && instr<31:28> != '1111' then                 // MRRC/MCRR
        case cpnum of
            when 14    exception = ExceptionSyndrome(Exception_CP14RRTTrap);
            when 15    exception = ExceptionSyndrome(Exception_CP15RRTTrap);
            otherwise  Unreachable();
        iss<19:16> = instr<7:4>;              // opc1
        if instr<19:16> == '1111' then        // Rt2==15
            iss<14:10> = bits(5) UNKNOWN;
        else
            iss<14:10> = LookUpRIndex(UInt(instr<19:16>), PSTATE.M)<4:0>;
        if instr<15:12> == '1111' then        // Rt==15
            iss<9:5> = bits(5) UNKNOWN;
        else
            iss<9:5> = LookUpRIndex(UInt(instr<15:12>), PSTATE.M)<4:0>;
        iss<4:1> = instr<3:0>;                // CRm
    elsif instr<27:25> == '110' && instr<31:28> != '1111' then                     // LDC/STC
        assert cpnum == 14;
        exception = ExceptionSyndrome(Exception_CP14DTTrap);
        iss<19:12> = instr<7:0>;              // imm8
        iss<4> = instr<23>;                   // U
        iss<2:1> = instr<24,21>;              // P,W
        if instr<19:16> == '1111' then        // Rn==15, LDC (literal addressing)/STC
            iss<9:5> = bits(5) UNKNOWN;
            iss<3> = '1';
        else
            iss<9:5> = LookUpRIndex(UInt(instr<19:16>), PSTATE.M)<4:0>;    // Rn
            iss<3> = '0';
    else
        Unreachable();
    iss<0> = instr<20>;                       // Direction

    exception.syndrome<24:20> = ConditionSyndrome();
    exception.syndrome<19:0> = iss;

    return exception;
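The MRC/MCR arm of this function packs six fields into the low 20 syndrome bits. A C sketch of that packing, directly mirroring the bit positions above (hypothetical helper; the result lands in syndrome<19:0>, with the condition code added separately at <24:20>):

    #include <stdint.h>

    /* Pack the MRC/MCR trap ISS fields: opc2 [19:17], opc1 [16:14],
     * CRn [13:10], Rt [9:5], CRm [4:1], direction (1 = read) [0]. */
    static uint32_t mcr_mrc_iss(uint32_t opc2, uint32_t opc1, uint32_t crn,
                                uint32_t rt, uint32_t crm, uint32_t is_read)
    {
        return ((opc2 & 0x7) << 17) | ((opc1 & 0x7) << 14) |
               ((crn  & 0xF) << 10) | ((rt   & 0x1F) << 5) |
               ((crm  & 0xF) << 1)  |  (is_read & 0x1);
    }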

Library pseudocode for aarch64/exceptions/traps/AArch64.AdvSIMDFPAccessTrap

// AArch64.AdvSIMDFPAccessTrap()
// =============================
// Trapped access to Advanced SIMD or FP registers due to CPACR[].

AArch64.AdvSIMDFPAccessTrap(bits(2) target_el)
    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    route_to_el2 = (target_el == EL1 && EL2Enabled() && HCR_EL2.TGE == '1');

    if route_to_el2 then
        exception = ExceptionSyndrome(Exception_Uncategorized);
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        exception = ExceptionSyndrome(Exception_AdvSIMDFPAccessTrap);
        exception.syndrome<24:20> = ConditionSyndrome();
        AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);

    return;

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckAArch32SystemAccess

// AArch64.CheckAArch32SystemAccess()
// ==================================
// Check AArch32 System register access instruction for enables and disables

AArch64.CheckAArch32SystemAccess(bits(32) instr)
    cp_num = UInt(instr<11:8>);
    assert cp_num IN {14,15};

    // Decode the AArch32 System register access instruction
    if instr<31:28> != '1111' && instr<27:24> == '1110' && instr<4> == '1' then    // MRC/MCR
        cprt = TRUE;  cpdt = FALSE;  nreg = 1;
        opc1 = UInt(instr<23:21>);
        opc2 = UInt(instr<7:5>);
        CRn  = UInt(instr<19:16>);
        CRm  = UInt(instr<3:0>);
    elsif instr<31:28> != '1111' && instr<27:21> == '1100010' then                 // MRRC/MCRR
        cprt = TRUE;  cpdt = FALSE;  nreg = 2;
        opc1 = UInt(instr<7:4>);
        CRm  = UInt(instr<3:0>);
    elsif instr<31:28> != '1111' && instr<27:25> == '110' && instr<22> == '0' then // LDC/STC
        cprt = FALSE;  cpdt = TRUE;  nreg = 0;
        opc1 = 0;
        CRn  = UInt(instr<15:12>);
    else
        allocated = FALSE;

    //
    // Coarse-grain decode into CP14 or CP15 encoding space. Each of the CPxxxInstrDecode functions
    // returns TRUE if the instruction is allocated at the current Exception level, FALSE otherwise.
    if cp_num == 14 then
        // LDC and STC only supported for c5 in CP14 encoding space
        if cpdt && CRn != 5 then
            allocated = FALSE;
        else
            // Coarse-grained decode of CP14 based on opc1 field
            case opc1 of
                when 0     allocated = CP14DebugInstrDecode(instr);
                when 1     allocated = CP14TraceInstrDecode(instr);
                when 7     allocated = CP14JazelleInstrDecode(instr);    // JIDR only
                otherwise  allocated = FALSE;          // All other values are unallocated
    elsif cp_num == 15 then
        // LDC and STC not supported in CP15 encoding space
        if !cprt then
            allocated = FALSE;
        else
            allocated = CP15InstrDecode(instr);

            // Coarse-grain traps to EL2 have a higher priority than exceptions generated because
            // the access instruction is UNDEFINED
            if AArch64.CheckCP15InstrCoarseTraps(CRn, nreg, CRm) then
                // For a coarse-grain trap, if it is IMPLEMENTATION DEFINED whether an access from
                // User mode is UNDEFINED when the trap is disabled, then it is
                // IMPLEMENTATION DEFINED whether the same access is UNDEFINED or generates a trap
                // when the trap is enabled.
                if PSTATE.EL == EL0 && EL2Enabled() && !allocated then
                    if boolean IMPLEMENTATION_DEFINED "UNDEF unallocated CP15 access at EL0" then
                        UNDEFINED;
                AArch64.AArch32SystemAccessTrap(EL2, instr);
    else
        allocated = FALSE;

    if !allocated then
        UNDEFINED;

    // If the instruction is not UNDEFINED, it might be disabled or trapped to a higher EL.
    AArch64.CheckAArch32SystemAccessTraps(instr);

    return;

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckAArch32SystemAccessEL1Traps

// AArch64.CheckAArch32SystemAccessEL1Traps()
// ==========================================
// Check for configurable disables or traps to EL1 or EL2 of an AArch32 System register
// access instruction.

AArch64.CheckAArch32SystemAccessEL1Traps(bits(32) instr)
    assert PSTATE.EL == EL0;
    trap = FALSE;

    // Decode the AArch32 System register access instruction
    (op, cp_num, opc1, CRn, CRm, opc2, write) = AArch32.DecodeSysRegAccess(instr);

    if cp_num == 14 then
        if ((op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 5 && opc2 == 0) ||   // DBGDTRRXint/DBGDTRTXint
            (op == SystemAccessType_DT && CRn == 5 && opc2 == 0)) then                         // DBGDTRRXint/DBGDTRTXint (STC/LDC)
            trap = !Halted() && MDSCR_EL1.TDCC == '1';
        elsif opc1 == 0 then
            trap = MDSCR_EL1.TDCC == '1';
        elsif opc1 == 1 then
            trap = CPACR[].TTA == '1';

    elsif cp_num == 15 then
        if ((op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 12 && opc2 == 0) ||  // PMCR
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 12 && opc2 == 1) ||  // PMCNTENSET
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 12 && opc2 == 2) ||  // PMCNTENCLR
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 12 && opc2 == 3) ||  // PMOVSR
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 12 && opc2 == 6) ||  // PMCEID0
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 12 && opc2 == 7) ||  // PMCEID1
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 13 && opc2 == 1) ||  // PMXEVTYPER
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 14 && opc2 == 3) ||  // PMOVSSET
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 14 && CRm >= 12)) then           // PMEVTYPER<n>
            trap = PMUSERENR_EL0.EN == '0';
        elsif op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 14 && opc2 == 4 then  // PMSWINC
            trap = PMUSERENR_EL0.EN == '0' && PMUSERENR_EL0.SW == '0';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 13 && opc2 == 0) || // PMCCNTR
               (op == SystemAccessType_RRT && opc1 == 0 && CRm == 9)) then                       // PMCCNTR (MRRC/MCRR)
            trap = PMUSERENR_EL0.EN == '0' && (write || PMUSERENR_EL0.CR == '0');
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 13 && opc2 == 2) || // PMXEVCNTR
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 14 && CRm >= 8 && CRm <= 11)) then // PMEVCNTR<n>
            trap = PMUSERENR_EL0.EN == '0' && (write || PMUSERENR_EL0.ER == '0');
        elsif op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 12 && opc2 == 5 then  // PMSELR
            trap = PMUSERENR_EL0.EN == '0' && PMUSERENR_EL0.ER == '0';
        elsif op == SystemAccessType_RT && opc1 == 0 && CRn == 14 && CRm == 2 && opc2 IN {0,1,2} then // CNTP_TVAL CNTP_CTL CNTP_CVAL
            trap = CNTKCTL[].EL0PTEN == '0';
        elsif op == SystemAccessType_RT && opc1 == 0 && CRn == 14 && CRm == 0 && opc2 == 0 then  // CNTFRQ
            trap = CNTKCTL[].EL0PCTEN == '0' && CNTKCTL[].EL0VCTEN == '0';
        elsif op == SystemAccessType_RRT && opc1 == 1 && CRm == 14 then                          // CNTVCT
            trap = CNTKCTL[].EL0VCTEN == '0';

    if trap then
        AArch64.AArch32SystemAccessTrap(EL1, instr);
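The PMCCNTR arm of this chain is a good example of the PMUSERENR_EL0 gating pattern: reads can be separately enabled by the CR bit even when full EL0 access (EN) is disabled. A C sketch of just that predicate (hypothetical helper):

    #include <stdbool.h>

    /* An EL0 PMCCNTR access traps to EL1 unless PMUSERENR_EL0.EN grants
     * full PMU access, or the access is a read and PMUSERENR_EL0.CR
     * grants cycle-counter reads. */
    static bool pmccntr_traps_at_el0(bool en, bool cr, bool is_write)
    {
        return !en && (is_write || !cr);
    }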

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckAArch32SystemAccessEL2Traps

// AArch64.CheckAArch32SystemAccessEL2Traps()
// ==========================================
// Check for configurable traps to EL2 of an AArch32 System register access instruction.

AArch64.CheckAArch32SystemAccessEL2Traps(bits(32) instr)
    assert EL2Enabled() && PSTATE.EL IN {EL0, EL1, EL2};
    trap = FALSE;

    // Decode the AArch32 System register access instruction
    (op, cp_num, opc1, CRn, CRm, opc2, write) = AArch32.DecodeSysRegAccess(instr);

    if cp_num == 14 && PSTATE.EL IN {EL0, EL1} then
        if ((op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 0 && opc2 == 0) ||   // DBGDRAR
            (op == SystemAccessType_RRT && opc1 == 0 && CRm == 1) ||                           // DBGDRAR (MRRC)
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 2 && CRm == 0 && opc2 == 0) ||   // DBGDSAR
            (op == SystemAccessType_RRT && opc1 == 0 && CRm == 2)) then                        // DBGDSAR (MRRC)
            trap = MDCR_EL2.TDRA == '1' || MDCR_EL2.TDE == '1' || HCR_EL2.TGE == '1';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 0 && opc2 == 4) ||   // DBGOSLAR
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 1 && opc2 == 4) ||   // DBGOSLSR
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 3 && opc2 == 4) ||   // DBGOSDLR
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 4 && opc2 == 4)) then // DBGPRCR
            trap = MDCR_EL2.TDOSA == '1' || MDCR_EL2.TDE == '1' || HCR_EL2.TGE == '1';
        elsif opc1 == 0 && (!Halted() || !(op == SystemAccessType_RT && CRn == 0 && CRm == 5 && opc2 == 0)) then
            trap = MDCR_EL2.TDA == '1' || MDCR_EL2.TDE == '1' || HCR_EL2.TGE == '1';
        elsif opc1 == 1 then
            trap = CPTR_EL2.TTA == '1';
        elsif op == SystemAccessType_RT && opc1 == 7 && CRn == 0 && CRm == 0 && opc2 == 0 then    // JIDR
            trap = HCR_EL2.TID0 == '1';

    elsif cp_num == 14 && PSTATE.EL == EL2 then
        if opc1 == 1 then
            trap = CPTR_EL2.TTA == '1';

    elsif cp_num == 15 && PSTATE.EL IN {EL0, EL1} then
        if ((op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 0 && opc2 == 0) ||   // SCTLR
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 2 && CRm == 0 && opc2 == 0) ||   // TTBR0
            (op == SystemAccessType_RRT && opc1 == 0 && CRm == 2) ||                           // TTBR0 (MRRC/MCRR)
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 2 && CRm == 0 && opc2 == 1) ||   // TTBR1
            (op == SystemAccessType_RRT && opc1 == 1 && CRm == 2) ||                           // TTBR1 (MRRC/MCRR)
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 2 && CRm == 0 && opc2 == 2) ||   // TTBCR
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 2 && CRm == 0 && opc2 == 3) ||   // TTBCR2
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 3 && CRm == 0 && opc2 == 0) ||   // DACR
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 5 && CRm == 0 && opc2 == 0) ||   // DFSR
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 5 && CRm == 0 && opc2 == 1) ||   // IFSR
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 6 && CRm == 0 && opc2 == 0) ||   // DFAR
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 6 && CRm == 0 && opc2 == 2) ||   // IFAR
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 5 && CRm == 1 && opc2 == 0) ||   // ADFSR
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 5 && CRm == 1 && opc2 == 1) ||   // AIFSR
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 10 && CRm == 2 && opc2 == 0) ||  // PRRR/MAIR0
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 10 && CRm == 2 && opc2 == 1) ||  // NMRR/MAIR1
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 10 && CRm == 3 && opc2 == 0) ||  // AMAIR0
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 10 && CRm == 3 && opc2 == 1) ||  // AMAIR1
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 13 && CRm == 0 && opc2 == 1)) then // CONTEXTIDR
            trap = if write then HCR_EL2.TVM == '1' else HCR_EL2.TRVM == '1';
        elsif op == SystemAccessType_RT && opc1 == 0 && CRn == 8 then                          // TLBI
            trap = write && HCR_EL2.TTLB == '1';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 6 && opc2 == 2) ||   // DCISW
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 10 && opc2 == 2) ||  // DCCSW
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 14 && opc2 == 2)) then // DCCISW
            trap = write && HCR_EL2.TSW == '1';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 6 && opc2 == 1) ||   // DCIMVAC
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 10 && opc2 == 1) ||  // DCCMVAC
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 14 && opc2 == 1)) then // DCCIMVAC
            trap = write && HCR_EL2.TPCP == '1';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 5 && opc2 == 1) ||   // ICIMVAU
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 5 && opc2 == 0) ||   // ICIALLU
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 1 && opc2 == 0) ||   // ICIALLUIS
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 11 && opc2 == 1)) then // DCCMVAU
            trap = write && HCR_EL2.TPU == '1';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 0 && opc2 == 1) ||   // ACTLR
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 0 && opc2 == 3)) then // ACTLR2
            trap = HCR_EL2.TACR == '1';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 0 && opc2 == 2) ||   // TCMTR
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 0 && opc2 == 3) ||   // TLBTR
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 0 && opc2 == 6) ||   // REVIDR
               (op == SystemAccessType_RT && opc1 == 1 && CRn == 0 && CRm == 0 && opc2 == 7)) then // AIDR
            trap = HCR_EL2.TID1 == '1';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 0 && opc2 == 1) ||   // CTR
               (op == SystemAccessType_RT && opc1 == 1 && CRn == 0 && CRm == 0 && opc2 == 0) ||   // CCSIDR
               (op == SystemAccessType_RT && opc1 == 1 && CRn == 0 && CRm == 0 && opc2 == 2) ||   // CCSIDR2
               (op == SystemAccessType_RT && opc1 == 1 && CRn == 0 && CRm == 0 && opc2 == 1) ||   // CLIDR
               (op == SystemAccessType_RT && opc1 == 2 && CRn == 0 && CRm == 0 && opc2 == 0)) then // CSSELR
            trap = HCR_EL2.TID2 == '1';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 1) ||                // ID_*
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 2 && opc2 <= 7) ||   // ID_*
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm >= 3 && opc2 <= 1) ||   // Reserved
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 3 && opc2 == 2) ||   // Reserved
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 0 && CRm == 5 && opc2 IN {4,5})) then // Reserved
            trap = HCR_EL2.TID3 == '1';
        elsif op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 0 && opc2 == 2 then    // CPACR
            trap = CPTR_EL2.TCPAC == '1';
        elsif op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm == 12 && opc2 == 0 then   // PMCR
            trap = MDCR_EL2.TPMCR == '1' || MDCR_EL2.TPM == '1';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 14 && CRm >= 8) ||               // PMEVCNTR<n>/PMEVTYPER<n>
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm IN {12,13,14}) ||       // PM*
               (op == SystemAccessType_RRT && opc1 == 0 && CRm == 9)) then                        // PMCCNTR (MRRC/MCRR)
            trap = MDCR_EL2.TPM == '1';
        elsif op == SystemAccessType_RT && opc1 == 0 && CRn == 14 && CRm == 2 && opc2 IN {0,1,2} then // CNTP_TVAL CNTP_CTL CNTP_CVAL
            if !HaveVirtHostExt() || HCR_EL2.E2H == '0' then
                trap = CNTHCTL_EL2.EL1PCEN == '0';
            else
                trap = CNTHCTL_EL2.EL1PTEN == '0';
        elsif op == SystemAccessType_RRT && opc1 == 0 && CRm == 14 then                           // CNTPCT
            trap = CNTHCTL_EL2.EL1PCTEN == '0';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 1 && opc2 == 0) ||   // SCR
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 1 && opc2 == 2) ||   // NSACR
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 12 && CRm == 0 && opc2 == 1) ||  // MVBAR
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 3 && opc2 == 1) ||   // SDCR
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 8 && opc2 >= 4)) then // ATS12NSOxx
            trap = IsSecureEL2Enabled() && PSTATE.EL == EL1 && IsSecure() && ELUsingAArch32(EL1);

    if trap then
        AArch64.AArch32SystemAccessTrap(EL2, instr);

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckAArch32SystemAccessEL3Traps

// AArch64.CheckAArch32SystemAccessEL3Traps()
// ==========================================
// Check for configurable traps to EL3 of an AArch32 System register access instruction.

AArch64.CheckAArch32SystemAccessEL3Traps(bits(32) instr)
    assert HaveEL(EL3) && PSTATE.EL != EL3;

    // Decode the AArch32 System register access instruction
    (op, cp_num, opc1, CRn, CRm, opc2, write) = AArch32.DecodeSysRegAccess(instr);
    trap = FALSE;

    if cp_num == 14 then
        if ((op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 0 && opc2 == 4 && !write) || // DBGOSLAR
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 1 && opc2 == 4 && write) ||  // DBGOSLSR
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 3 && opc2 == 4) ||           // DBGOSDLR
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 4 && opc2 == 4)) then        // DBGPRCR
            trap = MDCR_EL3.TDOSA == '1';
        elsif opc1 == 0 && (!Halted() || !(op == SystemAccessType_RT && CRn == 0 && CRm == 5 && opc2 == 0)) then
            trap = MDCR_EL3.TDA == '1';
        elsif opc1 == 1 then
            trap = CPTR_EL3.TTA == '1';

    elsif cp_num == 15 then
        if ((op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 1 && opc2 == 0) ||    // SCR
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 1 && opc2 == 2) ||    // NSACR
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 12 && CRm == 0 && opc2 == 1) ||   // MVBAR
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 3 && opc2 == 1) ||    // SDCR
            (op == SystemAccessType_RT && opc1 == 0 && CRn == 7 && CRm == 8 && opc2 >= 4)) then // ATS12NSOxx
            trap = PSTATE.EL == EL1 && IsSecure();
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 1 && CRm == 0 && opc2 == 2) ||    // CPACR
               (op == SystemAccessType_RT && opc1 == 4 && CRn == 1 && CRm == 1 && opc2 == 2)) then // HCPTR
            trap = CPTR_EL3.TCPAC == '1';
        elsif ((op == SystemAccessType_RT && opc1 == 0 && CRn == 14 && CRm >= 8) ||             // PMEVCNTR<n>/PMEVTYPER<n>
               (op == SystemAccessType_RT && opc1 == 0 && CRn == 9 && CRm IN {12,13,14}) ||     // PM*
               (op == SystemAccessType_RRT && opc1 == 0 && CRm == 9)) then                      // PMCCNTR (MRRC/MCRR)
            trap = MDCR_EL3.TPM == '1';

    if trap then
        AArch64.AArch32SystemAccessTrap(EL3, instr);

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckAArch32SystemAccessTraps

// AArch64.CheckAArch32SystemAccessTraps()
// =======================================
// Check for configurable disables or traps to a higher EL of an AArch32 System register access.

AArch64.CheckAArch32SystemAccessTraps(bits(32) instr)

    if PSTATE.EL == EL0 then
        AArch64.CheckAArch32SystemAccessEL1Traps(instr);
    if EL2Enabled() && PSTATE.EL IN {EL0, EL1, EL2} && !IsInHost() then
        AArch64.CheckAArch32SystemAccessEL2Traps(instr);
    if HaveEL(EL3) && PSTATE.EL != EL3 then    // Guard matches the assertion in the EL3 check
        AArch64.CheckAArch32SystemAccessEL3Traps(instr);

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckCP15InstrCoarseTraps

// AArch64.CheckCP15InstrCoarseTraps()
// ===================================
// Check for coarse-grained AArch32 CP15 traps in HSTR_EL2 and HCR_EL2.

boolean AArch64.CheckCP15InstrCoarseTraps(integer CRn, integer nreg, integer CRm)

    // Check for coarse-grained Hyp traps
    if EL2Enabled() && PSTATE.EL IN {EL0,EL1} then
        // Check for MCR, MRC, MCRR and MRRC disabled by HSTR_EL2<CRn/CRm>
        major = if nreg == 1 then CRn else CRm;
        if !IsInHost() && !(major IN {4,14}) && HSTR_EL2<major> == '1' then
            return TRUE;

        // Check for MRC and MCR disabled by HCR_EL2.TIDCP
        if (HCR_EL2.TIDCP == '1' && nreg == 1 &&
            ((CRn == 9  && CRm IN {0,1,2, 5,6,7,8 }) ||
             (CRn == 10 && CRm IN {0,1, 4, 8 }) ||
             (CRn == 11 && CRm IN {0,1,2,3,4,5,6,7,8,15}))) then
            return TRUE;

    return FALSE;
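A C sketch of the HSTR_EL2 part of this check (hypothetical helper): the trap bit index is CRn for one-register transfers and CRm for two-register transfers, and majors 4 and 14 have no trap bit:

    #include <stdbool.h>
    #include <stdint.h>

    static bool hstr_el2_traps(uint32_t hstr_el2, unsigned crn, unsigned crm,
                               unsigned nreg, bool in_host)
    {
        unsigned major = (nreg == 1) ? crn : crm;
        if (in_host || major == 4 || major == 14)
            return false;
        return (hstr_el2 >> major) & 1;    /* HSTR_EL2<major> */
    }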

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckFPAdvSIMDEnabled

// AArch64.CheckFPAdvSIMDEnabled()
// ===============================
// Check against CPACR[]

AArch64.CheckFPAdvSIMDEnabled()
    if PSTATE.EL IN {EL0, EL1} && !IsInHost() then
        // Check if access disabled in CPACR_EL1
        case CPACR[].FPEN of
            when 'x0'  disabled = TRUE;
            when '01'  disabled = PSTATE.EL == EL0;
            when '11'  disabled = FALSE;
        if disabled then AArch64.AdvSIMDFPAccessTrap(EL1);

    AArch64.CheckFPAdvSIMDTrap();    // Also check against CPTR_EL2 and CPTR_EL3
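The FPEN decode is worth spelling out, since the 'x0' pattern matches two of the four encodings. A C sketch (hypothetical helper):

    #include <stdbool.h>

    /* CPACR_EL1.FPEN: 'x0' (0b00/0b10) disables FP/SIMD at EL0 and EL1,
     * '01' disables it at EL0 only, '11' enables it. */
    static bool fp_disabled_by_cpacr(unsigned fpen, bool at_el0)
    {
        switch (fpen & 3) {
        case 0:
        case 2:  return true;      /* 'x0' */
        case 1:  return at_el0;    /* '01' */
        default: return false;     /* '11' */
        }
    }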

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckFPAdvSIMDTrap

// AArch64.CheckFPAdvSIMDTrap()
// ============================
// Check against CPTR_EL2 and CPTR_EL3.

AArch64.CheckFPAdvSIMDTrap()
    if EL2Enabled() then
        // Check if access disabled in CPTR_EL2
        if HaveVirtHostExt() && HCR_EL2.E2H == '1' then
            case CPTR_EL2.FPEN of
                when 'x0'  disabled = !(PSTATE.EL == EL1 && HCR_EL2.TGE == '1');
                when '01'  disabled = (PSTATE.EL == EL0 && HCR_EL2.TGE == '1');
                when '11'  disabled = FALSE;
            if disabled then AArch64.AdvSIMDFPAccessTrap(EL2);
        else
            if CPTR_EL2.TFP == '1' then AArch64.AdvSIMDFPAccessTrap(EL2);

    if HaveEL(EL3) then
        // Check if access disabled in CPTR_EL3
        if CPTR_EL3.TFP == '1' then AArch64.AdvSIMDFPAccessTrap(EL3);

    return;
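With HCR_EL2.E2H set, CPTR_EL2.FPEN mirrors the CPACR-style decode but with TGE deciding which ELs are affected; without E2H a single TFP bit traps all FP/SIMD use. A C sketch of that split (hypothetical helper):

    #include <stdbool.h>

    static bool fp_trapped_to_el2(bool e2h, unsigned fpen, bool tfp,
                                  bool at_el0, bool at_el1, bool tge)
    {
        if (!e2h)
            return tfp;                    /* legacy CPTR_EL2.TFP check */
        switch (fpen & 3) {
        case 0:
        case 2:  return !(at_el1 && tge);  /* 'x0' */
        case 1:  return at_el0 && tge;     /* '01' */
        default: return false;             /* '11' */
        }
    }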

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckForERetTrap

// AArch64.CheckForERetTrap()
// ==========================
// Check for trap on ERET, ERETAA, ERETAB instruction

AArch64.CheckForERetTrap(boolean eret_with_pac, boolean pac_uses_key_a)

    // Non-secure EL1 execution of ERET, ERETAA, or ERETAB, when HCR_EL2.NV is set, is trapped to EL2
    route_to_el2 = HaveNVExt() && EL2Enabled() && PSTATE.EL == EL1 && HCR_EL2.NV == '1';

    if route_to_el2 then
        ExceptionRecord exception;
        bits(64) preferred_exception_return = ThisInstrAddr();
        vect_offset = 0x0;
        exception = ExceptionSyndrome(Exception_ERetTrap);
        if !eret_with_pac then              // ERET
            exception.syndrome<1> = '0';
            exception.syndrome<0> = '0';    // RES0
        else
            exception.syndrome<1> = '1';
            if pac_uses_key_a then          // ERETAA
                exception.syndrome<0> = '0';
            else                            // ERETAB
                exception.syndrome<0> = '1';
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
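The two syndrome bits distinguish the three instructions. A C sketch of the encoding (hypothetical helper):

    #include <stdint.h>

    /* syndrome<1:0>: ERET = '00', ERETAA = '10', ERETAB = '11'. */
    static uint32_t eret_trap_syndrome(int with_pac, int uses_key_a)
    {
        if (!with_pac)
            return 0x0;
        return 0x2 | (uses_key_a ? 0x0 : 0x1);
    }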

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckForSMCUndefOrTrap

// AArch64.CheckForSMCUndefOrTrap()
// ================================
// Check for UNDEFINED or trap on SMC instruction

AArch64.CheckForSMCUndefOrTrap(bits(16) imm)
    route_to_el2 = EL2Enabled() && PSTATE.EL == EL1 && HCR_EL2.TSC == '1';

    if PSTATE.EL == EL0 then UNDEFINED;

    if !HaveEL(EL3) then
        if EL2Enabled() && PSTATE.EL == EL1 then
            if HaveNVExt() && HCR_EL2.NV == '1' && HCR_EL2.TSC == '1' then
                route_to_el2 = TRUE;
            else
                UNDEFINED;
        else
            UNDEFINED;
    else
        route_to_el2 = EL2Enabled() && PSTATE.EL == EL1 && HCR_EL2.TSC == '1';

    if route_to_el2 then
        bits(64) preferred_exception_return = ThisInstrAddr();
        vect_offset = 0x0;
        exception = ExceptionSyndrome(Exception_MonitorCall);
        exception.syndrome<15:0> = imm;
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
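The three possible outcomes (UNDEFINED, trap to EL2, proceed toward EL3) follow from the presence of EL3 and the NV/TSC controls. A C sketch of the disposition (hypothetical helper and enum):

    #include <stdbool.h>

    enum smc_result { SMC_UNDEF, SMC_TRAP_EL2, SMC_PROCEED };

    static enum smc_result smc_disposition(int el, bool have_el3,
                                           bool el2_enabled, bool nv, bool tsc)
    {
        if (el == 0)
            return SMC_UNDEF;              /* SMC is UNDEFINED at EL0 */
        if (!have_el3) {
            if (el2_enabled && el == 1 && nv && tsc)
                return SMC_TRAP_EL2;
            return SMC_UNDEF;              /* no EL3 to call */
        }
        if (el2_enabled && el == 1 && tsc)
            return SMC_TRAP_EL2;
        return SMC_PROCEED;                /* taken to EL3 */
    }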

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckForWFxTrap

// AArch64.CheckForWFxTrap()
// =========================
// Check for trap on WFE or WFI instruction

AArch64.CheckForWFxTrap(bits(2) target_el, boolean is_wfe)
    assert HaveEL(target_el);

    case target_el of
        when EL1  trap = (if is_wfe then SCTLR[].nTWE else SCTLR[].nTWI) == '0';
        when EL2  trap = (if is_wfe then HCR_EL2.TWE else HCR_EL2.TWI) == '1';
        when EL3  trap = (if is_wfe then SCR_EL3.TWE else SCR_EL3.TWI) == '1';

    if trap then
        AArch64.WFxTrap(target_el, is_wfe);
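Note the polarity difference: the EL1 controls are active-low "don't trap" bits (nTWE/nTWI), while the EL2/EL3 controls are active-high trap enables. A one-line C sketch (hypothetical helper; the caller passes the control bit matching WFE or WFI):

    #include <stdbool.h>

    static bool wfx_traps(int target_el, bool control_bit)
    {
        /* EL1: SCTLR.nTWE/nTWI == 0 traps; EL2/EL3: TWE/TWI == 1 traps. */
        return (target_el == 1) ? !control_bit : control_bit;
    }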

Library pseudocode for aarch64/exceptions/traps/AArch64.CheckIllegalState

// AArch64.CheckIllegalState()
// ===========================
// Check PSTATE.IL bit and generate Illegal Execution state exception if set.

AArch64.CheckIllegalState()
    if PSTATE.IL == '1' then
        route_to_el2 = EL2Enabled() && PSTATE.EL == EL0 && HCR_EL2.TGE == '1';

        bits(64) preferred_exception_return = ThisInstrAddr();
        vect_offset = 0x0;

        exception = ExceptionSyndrome(Exception_IllegalState);

        if UInt(PSTATE.EL) > UInt(EL1) then
            AArch64.TakeException(PSTATE.EL, exception, preferred_exception_return, vect_offset);
        elsif route_to_el2 then
            AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
        else
            AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/traps/AArch64.MonitorModeTrap

// AArch64.MonitorModeTrap()
// =========================
// Trapped use of Monitor mode features in a Secure EL1 AArch32 mode

AArch64.MonitorModeTrap()
    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_Uncategorized);

    if IsSecureEL2Enabled() then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);

    AArch64.TakeException(EL3, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/traps/AArch64.SystemRegisterTrap

// AArch64.SystemRegisterTrap()
// ============================
// Trapped system register access other than due to CPTR_EL2 and CPACR_EL1

AArch64.SystemRegisterTrap(bits(2) target_el, bits(2) op0, bits(3) op2, bits(3) op1,
                           bits(4) crn, bits(5) rt, bits(4) crm, bit dir)
    assert UInt(target_el) >= UInt(PSTATE.EL);

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_SystemRegisterTrap);
    exception.syndrome<21:20> = op0;
    exception.syndrome<19:17> = op2;
    exception.syndrome<16:14> = op1;
    exception.syndrome<13:10> = crn;
    exception.syndrome<9:5>   = rt;
    exception.syndrome<4:1>   = crm;
    exception.syndrome<0>     = dir;

    if target_el == EL1 && EL2Enabled() && HCR_EL2.TGE == '1' then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);
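A C sketch of the ISS packing for trapped MSR/MRS/system-instruction accesses, mirroring the field placement above (hypothetical helper):

    #include <stdint.h>

    /* op0 [21:20], op2 [19:17], op1 [16:14], CRn [13:10], Rt [9:5],
     * CRm [4:1], direction (1 = read) [0]. */
    static uint32_t sysreg_trap_iss(uint32_t op0, uint32_t op2, uint32_t op1,
                                    uint32_t crn, uint32_t rt, uint32_t crm,
                                    uint32_t dir)
    {
        return ((op0 & 0x3) << 20) | ((op2 & 0x7) << 17) |
               ((op1 & 0x7) << 14) | ((crn & 0xF) << 10) |
               ((rt & 0x1F) << 5)  | ((crm & 0xF) << 1)  | (dir & 0x1);
    }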

Library pseudocode for aarch64/exceptions/traps/AArch64.UndefinedFault

// AArch64.UndefinedFault()
// ========================

AArch64.UndefinedFault()
    route_to_el2 = EL2Enabled() && PSTATE.EL == EL0 && HCR_EL2.TGE == '1';

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_Uncategorized);

    if UInt(PSTATE.EL) > UInt(EL1) then
        AArch64.TakeException(PSTATE.EL, exception, preferred_exception_return, vect_offset);
    elsif route_to_el2 then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(EL1, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/traps/AArch64.WFxTrap

// AArch64.WFxTrap()
// =================

AArch64.WFxTrap(bits(2) target_el, boolean is_wfe)
    assert UInt(target_el) > UInt(PSTATE.EL);

    bits(64) preferred_exception_return = ThisInstrAddr();
    vect_offset = 0x0;

    exception = ExceptionSyndrome(Exception_WFxTrap);
    exception.syndrome<24:20> = ConditionSyndrome();
    exception.syndrome<0> = if is_wfe then '1' else '0';

    if target_el == EL1 && EL2Enabled() && HCR_EL2.TGE == '1' then
        AArch64.TakeException(EL2, exception, preferred_exception_return, vect_offset);
    else
        AArch64.TakeException(target_el, exception, preferred_exception_return, vect_offset);

Library pseudocode for aarch64/exceptions/traps/CheckFPAdvSIMDEnabled64

// CheckFPAdvSIMDEnabled64()
// =========================
// AArch64 instruction wrapper

CheckFPAdvSIMDEnabled64()
    AArch64.CheckFPAdvSIMDEnabled();

Library pseudocode for aarch64/functions/aborts/AArch64.CreateFaultRecord

// AArch64.CreateFaultRecord()
// ===========================

FaultRecord AArch64.CreateFaultRecord(Fault type, bits(52) ipaddress, bit NS, integer level,
                                      AccType acctype, boolean write, bit extflag,
                                      bits(2) errortype, boolean secondstage, boolean s2fs1walk)

    FaultRecord fault;
    fault.type              = type;
    fault.domain            = bits(4) UNKNOWN;    // Not used from AArch64
    fault.debugmoe          = bits(4) UNKNOWN;    // Not used from AArch64
    fault.errortype         = errortype;
    fault.ipaddress.NS      = NS;
    fault.ipaddress.address = ipaddress;
    fault.level             = level;
    fault.acctype           = acctype;
    fault.write             = write;
    fault.extflag           = extflag;
    fault.secondstage       = secondstage;
    fault.s2fs1walk         = s2fs1walk;

    return fault;

Library pseudocode for aarch64/functions/aborts/AArch64.FaultSyndrome

// AArch64.FaultSyndrome()
// =======================
// Creates an exception syndrome value for Abort and Watchpoint exceptions taken to
// an Exception Level using AArch64.

bits(25) AArch64.FaultSyndrome(boolean d_side, FaultRecord fault)
    assert fault.type != Fault_None;

    bits(25) iss = Zeros();
    if HaveRASExt() && IsExternalSyncAbort(fault) then iss<12:11> = fault.errortype; // SET
    if d_side then
        if IsSecondStage(fault) && !fault.s2fs1walk then iss<24:14> = LSInstructionSyndrome();
        if HaveNV2Ext() && fault.acctype == AccType_NV2REGISTER then
            iss<13> = '1';    // Value of '1' indicates fault is generated by use of VNCR_EL2
        if fault.acctype IN {AccType_DC, AccType_DC_UNPRIV, AccType_IC, AccType_AT} then
            iss<8> = '1';  iss<6> = '1';
        else
            iss<6> = if fault.write then '1' else '0';
    if IsExternalAbort(fault) then iss<9> = fault.extflag;
    iss<7> = if fault.s2fs1walk then '1' else '0';
    iss<5:0> = EncodeLDFSC(fault.type, fault.level);

    return iss;
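A C sketch of the data-side bit assembly above (hypothetical helper; it omits the RAS SET field and the load/store instruction syndrome in <24:14>):

    #include <stdbool.h>
    #include <stdint.h>

    /* CM+WnR for DC/IC/AT style accesses, WnR for ordinary writes, the
     * external-abort flag in bit 9, S1PTW in bit 7, and the fault status
     * code in <5:0>. */
    static uint32_t dside_fault_iss(bool is_cm, bool is_write, bool ext_abort,
                                    bool extflag, bool s2fs1walk, uint32_t fsc)
    {
        uint32_t iss = 0;
        if (is_cm)
            iss |= (1u << 8) | (1u << 6);
        else if (is_write)
            iss |= (1u << 6);
        if (ext_abort)
            iss |= (uint32_t)extflag << 9;
        iss |= (uint32_t)s2fs1walk << 7;
        iss |= fsc & 0x3F;
        return iss;
    }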

Library pseudocode for aarch64/functions/exclusive/AArch64.ExclusiveMonitorsPass

// AArch64.ExclusiveMonitorsPass()
// ===============================
// Return TRUE if the Exclusives monitors for the current PE include all of the addresses
// associated with the virtual address region of size bytes starting at address.
// The immediately following memory write must be to the same addresses.

boolean AArch64.ExclusiveMonitorsPass(bits(64) address, integer size)

    // It is IMPLEMENTATION DEFINED whether the detection of memory aborts happens
    // before or after the check on the local Exclusives monitor. As a result a failure
    // of the local monitor can occur on some implementations even if the memory
    // access would give a memory abort.

    acctype = AccType_ATOMIC;
    iswrite = TRUE;
    aligned = (address == Align(address, size));

    if !aligned then
        secondstage = FALSE;
        AArch64.Abort(address, AArch64.AlignmentFault(acctype, iswrite, secondstage));

    passed = AArch64.IsExclusiveVA(address, ProcessorID(), size);
    if !passed then
        return FALSE;

    memaddrdesc = AArch64.TranslateAddress(address, acctype, iswrite, aligned, size);

    // Check for aborts or debug exceptions
    if IsFault(memaddrdesc) then
        AArch64.Abort(address, memaddrdesc.fault);

    passed = IsExclusiveLocal(memaddrdesc.paddress, ProcessorID(), size);
    if passed then
        ClearExclusiveLocal(ProcessorID());
        if memaddrdesc.memattrs.shareable then
            passed = IsExclusiveGlobal(memaddrdesc.paddress, ProcessorID(), size);

    return passed;
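Stripped of the abort handling, the pass/fail decision reduces to a chain of monitor checks. A C sketch under that reading (hypothetical helper; the predicates are passed in by the caller):

    #include <stdbool.h>

    /* Alignment and the optional VA check come first, then the local
     * monitor; shareable locations must also pass the global monitor. */
    static bool exclusives_pass(bool aligned, bool va_match, bool local_match,
                                bool shareable, bool global_match)
    {
        if (!aligned || !va_match || !local_match)
            return false;
        return shareable ? global_match : true;
    }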

Library pseudocode for aarch64/functions/exclusive/AArch64.IsExclusiveVA

// An optional IMPLEMENTATION DEFINED test for an exclusive access to a virtual
// address region of size bytes starting at address.
//
// It is permitted (but not required) for this function to return FALSE and
// cause a store exclusive to fail if the virtual address region is not
// totally included within the region recorded by MarkExclusiveVA().
//
// It is