5G/NR - AI/ML

AI/ML - RRC Parameters

The overall RRC design for Artificial Intelligence and Machine Learning in 3GPP Release 19 represents a fundamental shift from static, rule-based signaling to a dynamic, data-driven framework. This architecture is built on a non-critical extension of the UE capability reporting system, specifically nested within the Release 19 capability structures to ensure backward compatibility with legacy 5G NR deployments. The design philosophy centers on model observability and control, allowing the network to manage the entire lifecycle of an AI model—from initial capability discovery and activation to continuous performance monitoring and eventual fallback.

At the heart of this RRC design is the abstraction of AI models into specific functional blocks, such as channel state information prediction and beam management. Rather than standardizing the internal black-box logic of a neural network, 3GPP focuses on standardizing the interfaces: the specific input data (Set B) provided to the model, the expected output format (Set A), and the computational budget required for processing. This approach preserves vendor innovation in algorithm design while ensuring that the gNodeB and UE share a mutual understanding of the model's intent and reliability.

The signaling framework is further refined by a resource management protocol that accounts for the unique computational demands of AI inference. By incorporating parameters for CPU pooling across component carriers and introducing relaxation timelines, the RRC design allows the UE to negotiate the latency "buffer" required to complete complex inference operations. This ensures that the integration of AI does not compromise the strict timing requirements of the 5G air interface, but instead enhances it through proactive, multi-slot predictions of the radio environment.

UE Capability

The introduction of AIML-Parameters-r19 marks the transition toward an AI/ML-based air interface. Instead of the network trying to infer how the UE perceives the radio environment, the UE can now use trained models to support functions such as beam management, CSI feedback, and positioning.

The ASN.1 shown in this section follows the usual 3GPP non-critical extension structure. By placing these fields under UE-NR-Capability-v1900, 3GPP is making it clear that Release 19 is the formal starting point for standardized AI/ML capability signaling.

This matters because AI and ML in mobile networks cannot work as an uncontrolled black box. The network needs to know whether the UE is capable of AI-related processing, whether the UE’s model is still valid in the current environment, and whether the UE can contribute useful data for improving future models. This capability framework makes AI and ML a controlled and predictable part of radio resource management rather than just an experimental feature.

UE-NR-Capability-v1860 ::= SEQUENCE {

    ntn-CHO-OnlyLocationTimeTrigger-r18    ENUMERATED { supported } OPTIONAL,

    nonCriticalExtension                   UE-NR-Capability-v1900 OPTIONAL

}

UE-NR-Capability-v1900 ::= SEQUENCE {

    aiml-Parameters-r19                         AIML-Parameters-r19 OPTIONAL,

    ...

}

AIML-Parameters-r19 ::= SEQUENCE {

    applicabilityReportingCSI-r19              ENUMERATED { supported } OPTIONAL,

    applicabilityReportingOther-r19            ENUMERATED { supported } OPTIONAL,

    loggedDataCollection-r19                   ENUMERATED { supported } OPTIONAL,

    eventBasedLoggedDataCollection-r19         ENUMERATED { supported } OPTIONAL,

    dataThresholdAvailabilityIndication-r19    ENUMERATED { supported } OPTIONAL

}

The following are short descriptions of the important parameters:

  • applicabilityReportingCSI-r19: Indicates whether the UE can report the applicability of its AI model specifically for Channel State Information. This helps the network determine whether the current AI model is reliable under the current radio conditions.
  • applicabilityReportingOther-r19: Indicates whether the UE can report model applicability for other AI/ML use cases, such as beam management or positioning.
  • loggedDataCollection-r19: Confirms that the UE can collect and store radio signal data for use as training data for machine learning models.
  • eventBasedLoggedDataCollection-r19: Indicates that the UE can trigger data logging only when specific radio events occur, such as a handover failure or a signal drop.
  • dataThresholdAvailabilityIndication-r19: Allows the UE to inform the network that it has collected enough data to be useful for model training or model updates.
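To illustrate how a receiver of this capability report might interpret it, the sketch below mirrors AIML-Parameters-r19 as a Python structure in which each ENUMERATED { supported } OPTIONAL field is either present or absent. The class and function names are hypothetical, not from the specification.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical mirror of AIML-Parameters-r19: each OPTIONAL field is either
# "supported" (present) or None (absent in the capability report).
@dataclass
class AIMLParametersR19:
    applicability_reporting_csi: Optional[str] = None
    applicability_reporting_other: Optional[str] = None
    logged_data_collection: Optional[str] = None
    event_based_logged_data_collection: Optional[str] = None
    data_threshold_availability_indication: Optional[str] = None

def supported_features(caps: AIMLParametersR19) -> list:
    """Return the names of the AI/ML features this UE declared."""
    return [name for name, value in vars(caps).items() if value == "supported"]

# A UE that declares CSI applicability reporting and event-based logging only.
ue_caps = AIMLParametersR19(
    applicability_reporting_csi="supported",
    event_based_logged_data_collection="supported",
)
print(supported_features(ue_caps))
# → ['applicability_reporting_csi', 'event_based_logged_data_collection']
```

Absent fields simply drop out of the list, matching the ASN.1 convention that an omitted OPTIONAL capability means "not supported".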

CSI Prediction

The core idea here is that the UE does not just measure the channel as it exists at the current moment. It uses an AI model to predict how the channel will evolve in the near future, especially in high-mobility conditions where Doppler effects become significant.

Overall, the flow is straightforward. The UE first informs the network about its AI processing capability, including its CPU budget and the resource types it supports. The network then configures a prediction or monitoring session. The UE uses the configured CSI-RS resources, including parameters such as N4, as input to its AI model. It then generates predicted CSI and reports it back to the network. If additional inference time is needed, the UE can rely on the relaxation timeline so that the report is delayed just enough to allow the neural-network-based processing to complete properly.
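The flow above can be sketched in a few lines of Python. The linear extrapolator stands in for the vendor-specific neural network, and all function names and numeric values are illustrative assumptions, not spec behavior.

```python
# Minimal sketch of the UE-side CSI prediction flow: buffer observations,
# run the "model", and honor the negotiated relaxation timeline.

def predict_csi(snapshots, horizon_slots):
    """Extrapolate the next channel quality value from buffered CSI-RS snapshots."""
    if len(snapshots) < 2:
        return snapshots[-1]                      # not enough history: hold last value
    slope = snapshots[-1] - snapshots[-2]         # per-slot trend from the last two samples
    return snapshots[-1] + slope * horizon_slots  # project forward by the horizon

def report_slot(measurement_slot, relaxation_slots):
    """Earliest slot at which the predicted CSI report can be sent,
    given the relaxation timeline granted by the network."""
    return measurement_slot + relaxation_slots

# Buffered CSI-RS observations (e.g. SINR in dB) and a 4-slot-ahead prediction.
history = [12.0, 11.5, 11.0]
print(predict_csi(history, horizon_slots=4))                   # → 9.0
print(report_slot(measurement_slot=100, relaxation_slots=14))  # → 114
```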

predictionConfiguration-r19            CHOICE {

   csi-InferencePrediction-r19            NULL,

   configurationForBM-PredictionAndDataCollection-r19  SEQUENCE {

        resourcesForChannelPrediction-r19      CSI-ResourceConfigId,

        ...

   },

   configurationForBM-Monitoring-r19      SEQUENCE {

        refToPredictionConfig-r19              CSI-ReportConfigId,

        ...

   },

   configurationForCSI-Monitoring-r19     SEQUENCE {

        refToPredictionConfig-r19              CSI-ReportConfigId,

         ...

   }

}

 

CA-ParametersNR-v1900 ::= SEQUENCE {

    aiml-CSI-PredictionDopplerPerBC-r19              CodebookParametersCSI-PredictionDoppler-r19 OPTIONAL,

 

    -- R1 58-0-1: CSI report framework for UE-side inference

    aiml-CSI-ReportPerBC-r19                         SEQUENCE (SIZE (1..2)) OF CPU-PoolInfo-r19 OPTIONAL,

 

    -- R1 58-3-1: CSI prediction for UE-sided inference when N4=1

    aiml-CSI-PredictionPerBC-r19                     SEQUENCE {

        supportedCSI-RS-ResourceList-r19            SEQUENCE (SIZE (1..maxNrofCSI-RS-ResourcesExt-r16)) OF INTEGER

                                                        (0..maxNrofCSI-RS-ResourcesAlt-1-r16),

        scalingFactor-r19                           ENUMERATED { n1, n2, n4 },

        numberOfOccupiedCPU-r19                     INTEGER (0..8),

        numberOfOccupiedCPUx-r19                    INTEGER (0..8),

        relaxationTimelineT-r19                     SEQUENCE {

            scs15kHz-r19                            ENUMERATED { n14, n28, n56, n112 },

            scs30kHz-r19                            ENUMERATED { n28, n56, n112, n224 },

            scs60kHz-r19                            ENUMERATED { n56, n112, n224, n448 },

            scs120kHz-r19                           ENUMERATED { n112, n224, n448 },

            scs480kHz-r19                           ENUMERATED { n448, n896, n1792 },

            scs960kHz-r19                           ENUMERATED { n896, n1792 }

        },

        occupiedResourcePool-r19                    INTEGER (1..2),

        inferenceReportType-r19                     ENUMERATED { aperiodic, semiPersistent }

    } OPTIONAL,

 

    -- R1 58-3-1a-1: DD unit size when A-CSI-RS is configured for CMR N4>1 for UE side inference of CSI prediction

    aiml-CSI-PredictionUnitDurationDD-PerBC-r19     ENUMERATED { supported } OPTIONAL,

 

    -- R1 58-3-2: CSI prediction for UE-sided inference when N4>1

    aiml-CSI-PredictionN4PerBC-r19                  SEQUENCE {

        supportedCSI-RS-ReportSettingAcrossCC-r19   SEQUENCE (SIZE (1..maxNrofCSI-RS-ResourcesExt-r16)) OF

                                                        SupportedCSI-RS-ReportSetting-r18,

        supportedCSI-RS-ReportSettingOneReport-r19  SEQUENCE (SIZE (1..maxNrofCSI-RS-ResourcesExt-r16)) OF

                                                        SupportedCSI-RS-ReportSetting-r18,

        numOccupiedCPU-r19                          INTEGER (0..8),

        numOccupiedCPUx-r19                         INTEGER (0..8),

        occupiedPool-r19                            ENUMERATED { p1, p2 }

    } OPTIONAL,

 

    -- R1 58-3-4: UE side data collection for CSI prediction

    aiml-CSI-PredictionUE-DataCollectionPerBC-r19   ENUMERATED { supported } OPTIONAL,

 

    -- R1 58-3-5: Performance monitoring for CSI prediction model

    aiml-CSI-PredictionMonitoringPerBC-r19          SEQUENCE {

        supportedCSI-RS-ResourceList-r19            SEQUENCE (SIZE (1..maxNrofCSI-RS-ResourcesExt-r16)) OF INTEGER

                                                        (0..maxNrofCSI-RS-ResourcesAlt-1-r16),

        numOccupiedCPU-r19                          INTEGER (1..2)

    } OPTIONAL

}

 

CodebookParametersCSI-PredictionDoppler-r19 ::= SEQUENCE {

    -- R1 58-3-1b: Maximum number of aperiodic CSI-RS resources that can be configured in the same CSI report setting for Rel-16-based

    -- doppler measurement for UE side inference of CSI prediction

    maxNumberOfAperiodic-CSI-RS-Resource-r19           ENUMERATED { n4, n8, n12 } OPTIONAL,

 

    -- R1 58-3-1-2: Support R=2 for Rel-16-based doppler codebook for UE side inference of CSI prediction

    eType2DopplerR2-r19                                SEQUENCE (SIZE (1..maxNrofCSI-RS-ResourcesExt-r16)) OF INTEGER

                                                           (0..maxNrofCSI-RS-ResourcesAlt-1-r16) OPTIONAL,

 

    -- R1 58-3-1-3: Support X=1 based on first and last slot of WCSI, for Rel-16-based doppler codebook for UE side inference of CSI prediction

    eType2DopplerX1-r19                                ENUMERATED { supported } OPTIONAL,

 

    -- R1 58-3-1-3a: Support X=2 CQI based on 2 slots for Rel-16-based doppler codebook for UE side inference of CSI prediction

    eType2DopplerX2-r19                                ENUMERATED { supported } OPTIONAL,

 

    -- R1 58-3-1-4: support of l = (n – nCSI,ref ) for CSI reference slot for Rel-16 based doppler codebook for UE side inference of CSI prediction

    eType2DopplerL-N4D1-r19                            ENUMERATED { supported } OPTIONAL,

 

    -- R1 58-3-1-5: Support of L=6 for Rel-16 based doppler codebook for UE side inference of CSI prediction

    eType2DopplerL6-r19                                ENUMERATED { supported } OPTIONAL,

 

    -- R1 58-3-1-6: Support of rank equals 3 and 4 for Rel-16 based doppler codebook for UE side inference of CSI prediction

    eType2DopplerR3R4-r19                              ENUMERATED { supported } OPTIONAL,

 

    -- R1 58-3-1-7: Active CSI-RS resources and ports for mixed R16 based doppler codebook for CSI prediction via UE side model with

    -- other codebooks in any slot

    codebookComboParameterMixedTypePrediction-r19      SEQUENCE {

        type1SP-Type1SP-N4-r19               SEQUENCE (SIZE (1..maxNrofCSI-RS-ResourcesExt-r16)) OF INTEGER

                                                           (0..maxNrofCSI-RS-ResourcesAlt-1-r16) OPTIONAL,

        type1SP-eType2SP-r19                 SEQUENCE (SIZE (1..maxNrofCSI-RS-ResourcesExt-r16)) OF INTEGER

                                                           (0..maxNrofCSI-RS-ResourcesAlt-1-r16) OPTIONAL,

        type1SP-eType2SP-N4-r19              SEQUENCE (SIZE (1..maxNrofCSI-RS-ResourcesExt-r16)) OF INTEGER

                                                           (0..maxNrofCSI-RS-ResourcesAlt-1-r16) OPTIONAL,

        type1SP-N4-eType2SP-r19              SEQUENCE (SIZE (1..maxNrofCSI-RS-ResourcesExt-r16)) OF INTEGER

                                                           (0..maxNrofCSI-RS-ResourcesAlt-1-r16) OPTIONAL,

        type1SP-N4-eType2SP-N4-r19           SEQUENCE (SIZE (1..maxNrofCSI-RS-ResourcesExt-r16)) OF INTEGER

                                                           (0..maxNrofCSI-RS-ResourcesAlt-1-r16) OPTIONAL,

        eType2SP-eType2SP-N4-r19             SEQUENCE (SIZE (1..maxNrofCSI-RS-ResourcesExt-r16)) OF INTEGER

                                                           (0..maxNrofCSI-RS-ResourcesAlt-1-r16) OPTIONAL

    } OPTIONAL

}

 

CPU-PoolInfo-r19 ::= SEQUENCE {

    maxNumCPU-PerCC-r19      INTEGER (1..8),

    maxNumCPU-AllCC-r19      INTEGER (1..32)

}

The following are short descriptions of the important parameters:

  • predictionConfiguration-r19 defines how the UE and the network coordinate AI-based prediction activity.
  • csi-InferencePrediction-r19 represents an inference mode in which the UE runs a pre-trained AI model, predicts CSI, and reports the result back to the gNB.
  • Monitoring configurations allow the network to compare predicted CSI with the actual channel condition, so it can evaluate prediction accuracy and monitor AI model performance over time.
  • CPU-PoolInfo-r19 defines the amount of processing resource that the UE can dedicate to AI and ML tasks.
  • maxNumCPU-AllCC-r19 indicates how much AI processing capacity the UE can share across multiple component carriers in carrier aggregation.
  • relaxationTimelineT-r19 tells the gNB how much extra processing time, measured in slots, the UE may need to complete AI inference for different subcarrier spacings.
  • CodebookParametersCSI-PredictionDoppler-r19 handles Doppler and high-mobility support for AI-based CSI prediction.
  • eType2DopplerR3R4-r19 indicates support for higher-rank transmission cases, such as rank 3 and rank 4, even when AI-based prediction is used.
  • X1 and X2 related parameters define the time window, in slots, that the AI model uses to analyze Doppler and frequency shift behavior.
  • scalingFactor-r19 defines the complexity or size of the AI model. A larger value usually means a larger or more precise model.
  • inferenceReportType-r19 defines how the UE reports AI-based results, such as by aperiodic or semi-persistent reporting.
  • supportedCSI-RS-ResourceList indicates which CSI-RS resources the UE’s AI model is capable of processing.
  • codebookComboParameter supports hybrid operation where some CSI reporting uses AI-based prediction while other parts still use legacy Type-I or Type-II codebook methods.
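To see why the enumerated relaxationTimelineT-r19 values double with each subcarrier spacing step, the sketch below converts an n&lt;slots&gt; value into milliseconds using the NR slot duration of 1 ms / 2^mu. The helper names are illustrative.

```python
# NR slot duration is 1 ms / 2^mu, where mu is the numerology index
# (15 kHz -> mu=0, 30 kHz -> mu=1, ..., 960 kHz -> mu=6). The ASN.1 above
# lists 15/30/60/120/480/960 kHz, so only those entries are modeled here.
SCS_TO_MU = {15: 0, 30: 1, 60: 2, 120: 3, 480: 5, 960: 6}

def relaxation_ms(scs_khz, n_slots):
    """Extra inference time in milliseconds implied by an n<slots> value."""
    slot_ms = 1.0 / (2 ** SCS_TO_MU[scs_khz])
    return n_slots * slot_ms

# n14 at 15 kHz, n28 at 30 kHz, and n896 at 960 kHz all grant the UE the
# same 14 ms of extra processing time, which is why the enumerated slot
# counts scale with the subcarrier spacing.
print(relaxation_ms(15, 14))    # → 14.0
print(relaxation_ms(30, 28))    # → 14.0
print(relaxation_ms(960, 896))  # → 14.0
```

In other words, the timeline is defined in slots for signaling convenience, but the underlying budget being negotiated is a fixed amount of wall-clock inference time.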

N4 Logic

N4 logic is the idea that the UE does not make an AI-based CSI prediction from only one instant of radio measurement. Instead, it can collect multiple CSI-RS observations over time and use them together as input to the AI model. In this sense, N4 represents the observation depth or the number of channel snapshots the UE buffers before running inference. When N4 is small, the model works with very limited history and the processing is simpler. When N4 is larger, the UE can observe channel evolution over multiple time points, which is especially useful for tracking Doppler behavior, fading trends, and other time-varying effects in high-mobility scenarios. At a high level, N4 defines how much past channel context the AI model uses in order to predict future channel behavior more accurately.

What is N4 for

  • N4 means the number of CSI-RS observations or channel snapshots that the UE uses as input for one AI-based CSI prediction.
  • It defines how much past channel history the UE buffers before running the AI inference.
  • The main purpose of N4 is to give the AI model enough time-domain context to predict future channel behavior more accurately.

Why N4 is Needed

  • If the UE uses only one observation, the model sees only the current channel condition.
  • If the UE uses multiple observations, the model can track how the channel changes over time.
  • This is especially important in high-mobility scenarios where Doppler and fading change quickly.

Small N4 vs Large N4

  • If N4 is small, the processing is simpler and the memory requirement is lower.
  • If N4 is large, the model has more historical context and can make a better time-series style prediction.
  • A larger N4 usually increases processing load, memory usage, and inference complexity.

High-Level Meaning

  • N4 is essentially the observation window size for AI-based CSI prediction.
  • It tells the network that the UE can buffer and process multiple CSI-RS resources before generating a prediction.
  • At a high level, N4 controls the tradeoff between prediction accuracy and implementation complexity.
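The buffering behavior described above can be sketched as a fixed-length observation window. The class name and sample values are illustrative, not from the specification.

```python
from collections import deque

# N4 as an observation window: the UE buffers the most recent N4 CSI-RS
# snapshots and only runs inference once the window is full. Older
# snapshots are dropped automatically as new ones arrive.
class N4Buffer:
    def __init__(self, n4):
        self.window = deque(maxlen=n4)   # keeps only the latest N4 snapshots

    def add_snapshot(self, csi_rs_sample):
        """Store one CSI-RS observation; return True when inference can run."""
        self.window.append(csi_rs_sample)
        return len(self.window) == self.window.maxlen

    def inference_input(self):
        """The channel history handed to the AI model."""
        return list(self.window)

buf = N4Buffer(n4=4)
for sample in [10.0, 9.8, 9.5, 9.1, 8.6]:   # five snapshots into a window of 4
    ready = buf.add_snapshot(sample)
print(buf.inference_input())   # → [9.8, 9.5, 9.1, 8.6] (oldest sample dropped)
```

A larger `n4` simply deepens the window, which is exactly the accuracy-versus-memory tradeoff described in the bullets above.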

Beam Management

While the previous section focused on predicting channel quality through CSI, this Beam Management section focuses on predicting the spatial direction of the signal. In Release 19, AI and ML are used to estimate which beam is likely to become the best beam in the near future, so the network can react before radio quality drops too much. This changes beam management from a reactive process into a more proactive one. Instead of waiting until beam quality degrades and then switching, the UE can use AI-based prediction to anticipate future beam behavior and help the gNB maintain more stable connectivity, especially in mobility scenarios where beam conditions can change very quickly.

MIMO-ParametersPerBand ::= SEQUENCE {

    aiml-CSI-PredictionDoppler-r19              CodebookParametersCSI-PredictionDoppler-r19 OPTIONAL,

 

    -- R1 58-0-1: CSI report framework for UE-side inference

    aiml-CSI-Report-r19                         SEQUENCE (SIZE (1..2)) OF CPU-PoolInfo-r19 OPTIONAL,

 

    -- R1 58-1-2: UE-side beam prediction for BM Case1 for inference

    aiml-BM-Case1-r19                           SEQUENCE {

        maxNumberOfInferenceReportPerBWP-r19    SEQUENCE {

            periodicReporting-r19               INTEGER (1..4),

            aperiodicReporting-r19              INTEGER (1..4),

            semiPersistentReporting-r19         INTEGER (1..4)

        },

        maxNumberOfInferenceReportAcrossAllCC-r19

                                               ENUMERATED { n1, n2, n3, n4, n8, n10, n12, n16 },

        maxNumberOfResourceSetB-r19            ENUMERATED { n4, n8, n16 },

        maxNumberOfResourceSetA-r19            ENUMERATED { n8, n16, n32, n64 },

        resourceTypeSetB-CSI-RS-r19            SEQUENCE {

            periodic-r19                       ENUMERATED { supported } OPTIONAL,

            aperiodic-r19                      ENUMERATED { supported } OPTIONAL,

            semiPersistent-r19                 ENUMERATED { supported } OPTIONAL

        },

        inferenceReportType-r19                SEQUENCE {

            periodic-r19                       ENUMERATED { supported } OPTIONAL,

            aperiodic-r19                      ENUMERATED { supported } OPTIONAL,

            semiPersistent-r19                 ENUMERATED { supported } OPTIONAL

        },

        subUseCases-r19                        ENUMERATED { subset, diffSet, both },

        maxNumberOfPredictedBeamPerReportingInstance-r19

                                               INTEGER (1..4),

        numberOfOccupiedCPU-r19                INTEGER (0..8),

        numberOfOccupiedCPUx-r19               INTEGER (0..8),

        relaxationTimelineD-r19                SEQUENCE {

            scs15kHz-r19                       ENUMERATED { n7, n14, n21, n28, n35, n42, n56 },

            scs30kHz-r19                       ENUMERATED { n14, n28, n42, n56, n70, n84, n112 },

            scs60kHz-r19                       ENUMERATED { n28, n56, n84, n112, n140, n168, n224 },

            scs120kHz-r19                      ENUMERATED { n56, n112, n168, n224, n280, n336, n448 },

            scs480kHz-r19                      ENUMERATED { n224, n448, n672, n896, n1120, n1344, n1792 },

            scs960kHz-r19                      ENUMERATED { n448, n896, n1344, n1792 }

        },

        relaxationTimelineD1-r19               SEQUENCE {

            scs15kHz-r19                       ENUMERATED { n7, n14, n21, n28, n35, n42, n56 },

            scs30kHz-r19                       ENUMERATED { n14, n28, n42, n56, n70, n84, n112 },

            scs60kHz-r19                       ENUMERATED { n28, n56, n84, n112, n140, n168, n224 },

            scs120kHz-r19                      ENUMERATED { n56, n112, n168, n224, n280, n336, n448 },

            scs480kHz-r19                      ENUMERATED { n224, n448, n672, n896, n1120, n1344, n1792 },

            scs960kHz-r19                      ENUMERATED { n448, n896, n1344, n1792 }

        },

        occupiedResourcePool-r19               INTEGER (1..2)

    } OPTIONAL,

 

    -- R1 58-1-3: UE-side beam prediction for BM Case1 with predicted RSRP for inference

    aiml-BM-Case1-PredictedRSRP-r19            INTEGER (1..4) OPTIONAL,

 

    -- R1 58-1-4: UE-side beam prediction for BM Case2 for inference

    aiml-BM-Case2-r19                          SEQUENCE {

        maxNumberOfInferenceReportPerBWP-r19   SEQUENCE {

            periodicReporting-r19              INTEGER (1..4),

            aperiodicReporting-r19             INTEGER (1..4),

            semiPersistentReporting-r19        INTEGER (1..4)

        },

        maxNumberOfInferenceReportAcrossAllCC-r19

                                               ENUMERATED { n1, n2, n3, n4, n8, n10, n12, n16 },

        maxNumberOfResourceSetB-r19            ENUMERATED { n4, n8, n16, n32, n64 },

        maxNumberOfResourceSetA-r19            ENUMERATED { n4, n8, n16, n32, n64 },

        minNumberOfKBM-SetB-r19                ENUMERATED { n2, n4, n8 },

        resourceTypeSetB-CSI-RS-r19            SEQUENCE {

            periodic-r19                       ENUMERATED { supported } OPTIONAL,

            semiPersistent-r19                 ENUMERATED { supported } OPTIONAL

        },

        inferenceReportType-r19                SEQUENCE {

            periodic-r19                       ENUMERATED { supported } OPTIONAL,

            aperiodic-r19                      ENUMERATED { supported } OPTIONAL,

            semiPersistent-r19                 ENUMERATED { supported } OPTIONAL

        },

        maxNumberOfPredictedBeamPerTimeInstance-r19

                                               INTEGER (1..4),

        maxNumberOfPredictedTimeInstance-r19   ENUMERATED { n1, n2, n4, n8 },

        maxTotalNumberOfPredictedBeamPerReport-r19

                                               ENUMERATED { n1, n2, n4, n6, n8, n12, n16, n32 },

        timeGap-r19                            ENUMERATED { ms10, ms20, ms40, ms80, ms160 },

        numberOfOccupiedCPU-r19                INTEGER (0..8),

        numberOfOccupiedCPUx-r19               INTEGER (0..8),

        relaxationTimelineD-r19                SEQUENCE {

            scs15kHz-r19                       ENUMERATED { n14, n28, n42, n56, n70, n84, n98 },

            scs30kHz-r19                       ENUMERATED { n28, n56, n84, n112, n140, n168, n196 },

            scs60kHz-r19                       ENUMERATED { n56, n112, n168, n224, n280, n336, n392 },

            scs120kHz-r19                      ENUMERATED { n112, n224, n336, n448 },

            scs480kHz-r19                      ENUMERATED { n448, n896, n1344, n1792 },

            scs960kHz-r19                      ENUMERATED { n896, n1792 }

        },

        relaxationTimelineD1-r19               SEQUENCE {

            scs15kHz-r19                       ENUMERATED { n14, n28, n42, n56, n70, n84, n98 },

            scs30kHz-r19                       ENUMERATED { n28, n56, n84, n112, n140, n168, n196 },

            scs60kHz-r19                       ENUMERATED { n56, n112, n168, n224, n280, n336, n392 },

            scs120kHz-r19                      ENUMERATED { n112, n224, n336, n448 },

            scs480kHz-r19                      ENUMERATED { n448, n896, n1344, n1792 },

            scs960kHz-r19                      ENUMERATED { n896, n1792 }

        },

        occupiedResourcePool-r19               INTEGER (1..2)

    } OPTIONAL,

 

    -- R1 58-1-5: UE-side beam prediction for BM-Case2 with predicted RSRP for inference

    aiml-BM-Case2-PredictedRSRP-r19            SEQUENCE {

        maxNumPredictedBeamPerInstance-r19     INTEGER (1..4),

        maxNumPredictedTime-r19                ENUMERATED { n1, n2, n4, n8 },

        maxTotalNumPredictedBeamInOneReport-r19

                                               ENUMERATED { n1, n2, n3, n4, n6, n8, n12, n16, n24, n32 }

    } OPTIONAL,

 

    -- R1 58-1-6: Performance monitoring for UE-sided model

    aiml-BM-Monitoring-r19                     SEQUENCE {

        maxNumTotalResource-r19                ENUMERATED { n4, n8, n16, n32, n64 },

        maxNumReportPerBWP-Periodic-r19        INTEGER (1..4),

        maxNumReportPerBWP-Aperiodic-r19       INTEGER (1..4),

        maxNumReportPerBWP-SP-r19              INTEGER (1..4),

        maxNumReportAcrossAllCC-r19            ENUMERATED { n1, n2, n4, n8 },

        maxNumOccasion-r19                     ENUMERATED { n1, n3, n7, n15 },

        monitoringResourceType-r19             ENUMERATED { periodic, semipersistent },

        monitoringReportType-r19               ENUMERATED { periodic, aperiodic, semipersistent }

    } OPTIONAL,

 

    -- R1 58-1-7: Data collection for UE-side beam prediction

    aiml-BM-UE-DataCollection-r19              SEQUENCE {

        subCase-r19                            ENUMERATED { equal, subset, notSubset },

        maxNumResourceSetB-r19                 ENUMERATED { n4, n8, n16, n32, n64 },

        maxNumResourceSetA-r19                 ENUMERATED { n8, n16, n32, n64 }

    } OPTIONAL,

 

    -- R1 58-3-1: CSI prediction for UE-sided inference when N4=1

    aiml-CSI-Prediction-r19                    SEQUENCE {

        supportedCSI-RS-ResourceList-r19       SEQUENCE (SIZE (1..maxNrofCSI-RS-ResourcesExt-r16)) OF INTEGER

                                                   (0..maxNrofCSI-RS-ResourcesAlt-1-r16),

        scalingFactor-r19                      ENUMERATED { n1, n2, n4 },

        numberOfOccupiedCPU-r19                INTEGER (0..8),

        numberOfOccupiedCPUx-r19               INTEGER (0..8),

        relaxationTimelineT-r19                SEQUENCE {

            scs15kHz-r19                       ENUMERATED { n14, n28, n56, n112 },

            scs30kHz-r19                       ENUMERATED { n28, n56, n112, n224 },

            scs60kHz-r19                       ENUMERATED { n56, n112, n224, n448 },

            scs120kHz-r19                      ENUMERATED { n112, n224, n448 },

            scs480kHz-r19                      ENUMERATED { n448, n896, n1792 },

            scs960kHz-r19                      ENUMERATED { n896, n1792 }

        },

        occupiedResourcePool-r19               INTEGER (1..2),

        inferenceReportType-r19                SEQUENCE {

            aperiodic-r19                      ENUMERATED { supported },

            semiPersistent-r19                 ENUMERATED { supported }

        }

    } OPTIONAL,

 

    -- R1 58-3-1a-1: DD unit size when A-CSI-RS is configured for CMR N4>1 for UE side inference of CSI prediction

    aiml-CSI-PredictionUnitDurationDD-r19      ENUMERATED { supported } OPTIONAL,

 

    -- R1 58-3-2: CSI prediction for UE-sided inference when N4>1

    aiml-CSI-PredictionN4-r19                  SEQUENCE {

        supportedCSI-RS-ReportSettingAcrossAllCC-r19

                                               SEQUENCE (SIZE (1..maxNrofCSI-RS-ResourcesExt-r16)) OF

                                                   SupportedCSI-RS-ReportSetting-r18,

        supportedCSI-RS-ReportSettingAcrossOneReport-r19

                                               SEQUENCE (SIZE (1..maxNrofCSI-RS-ResourcesExt-r16)) OF

                                                   SupportedCSI-RS-ReportSetting-r18,

        numOccupiedCPU-r19                     INTEGER (0..8),

        numOccupiedCPUx-r19                    INTEGER (0..8),

        occupiedPool-r19                       INTEGER (1..2)

    } OPTIONAL,

 

    -- R1 58-3-4: UE side data collection for CSI prediction

    aiml-CSI-PredictionUE-DataCollection-r19   ENUMERATED { supported } OPTIONAL,

 

    -- R1 58-3-5: Performance monitoring for CSI prediction model

    aiml-CSI-PredictionMonitoring-r19          SEQUENCE {

        supportedCSI-RS-ResourceList-r19       SEQUENCE (SIZE (1..maxNrofCSI-RS-ResourcesExt-r16)) OF INTEGER

                                                   (0..maxNrofCSI-RS-ResourcesAlt-1-r16),

        numOccupiedCPU-r19                     INTEGER (1..2)

    } OPTIONAL

}

 

CodebookParametersCSI-PredictionDoppler-r19 ::= SEQUENCE {

    -- R1 58-3-1b: Maximum number of aperiodic CSI-RS resources that can be configured in the same CSI report setting for Rel-16-based

    -- doppler measurement for UE side inference of CSI prediction

    maxNumberOfAperiodic-CSI-RS-Resource-r19      ENUMERATED { n4, n8, n12 } OPTIONAL,

 

    -- R1 58-3-1-2: Support R=2 for Rel-16-based doppler codebook for UE side inference of CSI prediction

    eType2DopplerR2-r19                           SEQUENCE (SIZE (1..maxNrofCSI-RS-ResourcesExt-r16)) OF INTEGER

                                                      (0..maxNrofCSI-RS-ResourcesAlt-1-r16) OPTIONAL,

 

    -- R1 58-3-1-3: Support X=1 based on first and last slot of WCSI, for Rel-16-based doppler codebook for UE side inference of CSI prediction

    eType2DopplerX1-r19                           ENUMERATED { supported } OPTIONAL,

 

    -- R1 58-3-1-3a: Support X=2 CQI based on 2 slots for Rel-16-based doppler codebook for UE side inference of CSI prediction
    eType2DopplerX2-r19                           ENUMERATED { supported } OPTIONAL,
    -- R1 58-3-1-4: Support of l = (n – nCSI,ref) for CSI reference slot for Rel-16 based doppler codebook for UE side inference of CSI prediction
    eType2DopplerL-N4D1-r19                       ENUMERATED { supported } OPTIONAL,
    -- R1 58-3-1-5: Support of L=6 for Rel-16 based doppler codebook for UE side inference of CSI prediction
    eType2DopplerL6-r19                           ENUMERATED { supported } OPTIONAL,
    -- R1 58-3-1-6: Support of rank equals 3 and 4 for Rel-16 based doppler codebook for UE side inference of CSI prediction
    eType2DopplerR3R4-r19                         ENUMERATED { supported } OPTIONAL,
    -- R1 58-3-1-7: Active CSI-RS resources and ports for mixed R16 based doppler codebook for CSI prediction via UE side model with
    -- other codebooks in any slot
    codebookComboParameterMixedTypePrediction-r19 SEQUENCE {
        type1SP-Type1SP-N4-r19                    SEQUENCE (SIZE (1..maxNrofCSI-RS-ResourcesExt-r16)) OF INTEGER
                                                      (0..maxNrofCSI-RS-ResourcesAlt-1-r16) OPTIONAL,
        type1SP-eType2SP-r19                      SEQUENCE (SIZE (1..maxNrofCSI-RS-ResourcesExt-r16)) OF INTEGER
                                                      (0..maxNrofCSI-RS-ResourcesAlt-1-r16) OPTIONAL,
        type1SP-eType2SP-N4-r19                   SEQUENCE (SIZE (1..maxNrofCSI-RS-ResourcesExt-r16)) OF INTEGER
                                                      (0..maxNrofCSI-RS-ResourcesAlt-1-r16) OPTIONAL,
        type1SP-N4-eType2SP-r19                   SEQUENCE (SIZE (1..maxNrofCSI-RS-ResourcesExt-r16)) OF INTEGER
                                                      (0..maxNrofCSI-RS-ResourcesAlt-1-r16) OPTIONAL,
        type1SP-N4-eType2SP-N4-r19                SEQUENCE (SIZE (1..maxNrofCSI-RS-ResourcesExt-r16)) OF INTEGER
                                                      (0..maxNrofCSI-RS-ResourcesAlt-1-r16) OPTIONAL,
        eType2SP-eType2SP-N4-r19                  SEQUENCE (SIZE (1..maxNrofCSI-RS-ResourcesExt-r16)) OF INTEGER
                                                      (0..maxNrofCSI-RS-ResourcesAlt-1-r16) OPTIONAL
    } OPTIONAL
}

CPU-PoolInfo-r19 ::= SEQUENCE {
    maxNumCPU-PerCC-r19      INTEGER (1..8),
    maxNumCPU-AllCC-r19      INTEGER (1..32)
}
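
To see how a scheduler might honor these two budgets, here is a minimal Python sketch. The structure mirrors CPU-PoolInfo-r19 (per-carrier and aggregate CPU limits); the admission-check logic itself is illustrative, not specified behavior, and all names (`CpuPoolInfo`, `can_admit`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CpuPoolInfo:
    """Mirrors CPU-PoolInfo-r19: per-CC and aggregate CPU budgets."""
    max_cpu_per_cc: int   # maxNumCPU-PerCC-r19, range 1..8
    max_cpu_all_cc: int   # maxNumCPU-AllCC-r19, range 1..32

def can_admit(pool: CpuPoolInfo, in_use_per_cc: dict, cc: int, needed: int) -> bool:
    """Return True if 'needed' extra CPUs on carrier 'cc' fit both budgets."""
    per_cc_ok = in_use_per_cc.get(cc, 0) + needed <= pool.max_cpu_per_cc
    all_cc_ok = sum(in_use_per_cc.values()) + needed <= pool.max_cpu_all_cc
    return per_cc_ok and all_cc_ok

pool = CpuPoolInfo(max_cpu_per_cc=4, max_cpu_all_cc=8)
usage = {0: 3, 1: 4}  # CPUs already occupied per carrier component
print(can_admit(pool, usage, cc=0, needed=1))  # True: 3+1 <= 4 per-CC and 7+1 <= 8 total
print(can_admit(pool, usage, cc=1, needed=1))  # False: 4+1 exceeds the per-CC budget
```

The point of the two limits is that a UE may have headroom on one carrier but still be out of aggregate budget across all carriers, so both checks must pass.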

The following are short descriptions of the key parameters:

  • maxNumPredictedBeam-r19: Indicates how many future candidate beams the UE can predict and report in a single AI-based Beam Management report.
  • maxNumberOfPredictedTimeInstance-r19: Indicates how many future time points the UE can predict, especially for Case 2 temporal beam prediction.
  • timeGap-r19: Defines the spacing between future prediction points. This tells the network how far apart the predicted beam states are in time.
  • maxNumberOfResourceSetA-r19: Indicates how many target beam resource sets (Set A) the UE can predict as output of the AI model.
  • maxNumberOfResourceSetB-r19: Indicates how many measured beam resource sets (Set B) the UE can use as observation input to the AI model.
  • aiml-BM-Case1-PredictedRSRP-r19: Indicates support for reporting predicted RSRP rather than only measured RSRP for beam management.
  • maxNumOccasion-r19: Defines how many monitoring occasions can be used when the network evaluates prediction accuracy over time.
  • relaxationTimelineD-r19: Indicates the extra processing time the UE may need to complete beam prediction inference and prepare the report.
  • relaxationTimelineD1-r19: Indicates additional processing time needed for more complex operations such as concurrent data collection or advanced multi-beam handling.
  • occupiedResourcePool-r19: Identifies which internal UE processing or hardware resource pool is assigned to the beam management task.
  • subset / diffSet related parameters: Indicate the relationship between the measured beam set and the predicted beam set, such as whether prediction is made within the same set or across different sets.
  • aiml-BM-Case1-r19: Represents support for AI-based beam prediction for the current or immediate next beam decision.
  • aiml-BM-Case2-r19: Represents support for more advanced temporal prediction where the UE predicts a sequence of future best beams over multiple time instances.
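
To show how these capability fields fit together, the sketch below models a few of them as a Python structure and checks a network prediction request against them. The field names are transliterated from the ASN.1; the container, the check, and the example values are illustrative, not specified behavior.

```python
from dataclasses import dataclass

@dataclass
class AimlBmCapability:
    """Illustrative container for a few AI-BM capability fields."""
    max_num_predicted_beam: int         # maxNumPredictedBeam-r19
    max_predicted_time_instances: int   # maxNumberOfPredictedTimeInstance-r19
    supported_time_gaps_ms: tuple       # candidate timeGap-r19 values
    predicted_rsrp_supported: bool      # aiml-BM-Case1-PredictedRSRP-r19

def request_is_supported(cap, beams, instances, gap_ms, wants_predicted_rsrp):
    """A configuration is usable only if every requested value fits the capability."""
    return (beams <= cap.max_num_predicted_beam
            and instances <= cap.max_predicted_time_instances
            and gap_ms in cap.supported_time_gaps_ms
            and (cap.predicted_rsrp_supported or not wants_predicted_rsrp))

cap = AimlBmCapability(4, 8, (20, 40, 80), True)
print(request_is_supported(cap, beams=4, instances=8, gap_ms=40, wants_predicted_rsrp=True))  # True
print(request_is_supported(cap, beams=4, instances=8, gap_ms=10, wants_predicted_rsrp=True))  # False
```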

AI-BM Case 1 vs. Case 2

Case 1 and Case 2 are the two main AI-based Beam Management approaches in Release 19, but they emphasize different dimensions of prediction. Case 1 is more of a spatial prediction problem. It focuses on identifying the best beam among candidate beams for the current moment or the immediate next moment based on the current observation. In other words, it is mainly about selecting the best direction in space. Case 2 extends this idea into the temporal domain. It predicts how the best beam will change over multiple future time instances, so the network can anticipate beam evolution ahead of time. In simple terms, Case 1 is mainly about spatial beam selection, while Case 2 is about time-evolving beam prediction.

  • Case 1 refers to intra-unit prediction (spatial prediction). The UE predicts the best beam for the current time instance or the immediate next time instance based on current measurements.
  • Case 2 refers to temporal prediction. The UE predicts a sequence of future best beams across multiple future time instances.
  • maxNumberOfPredictedTimeInstance-r19 indicates how many future time points the UE can predict.
  • timeGap-r19 defines the spacing between those future prediction points.
  • Case 1 is mainly about immediate refinement and accuracy, while Case 2 is about proactive beam tracking over time.
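
The interaction of maxNumberOfPredictedTimeInstance-r19 and timeGap-r19 in Case 2 can be sketched as below. The exact timing reference is my assumption for illustration; the sketch just shows that the two parameters together define the prediction horizon.

```python
def predicted_report_times(t_report_ms: float, time_gap_ms: float, num_instances: int):
    """Future time points covered by one Case 2 report:
    t_report + k * timeGap for k = 1..N, where N is bounded by
    maxNumberOfPredictedTimeInstance-r19 and the gap by timeGap-r19."""
    return [t_report_ms + k * time_gap_ms for k in range(1, num_instances + 1)]

# A report at t=100 ms with a 20 ms gap and 4 instances covers an 80 ms horizon.
print(predicted_report_times(100.0, 20.0, 4))  # [120.0, 140.0, 160.0, 180.0]
```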

Resource Set A and Resource Set B

Resource Set A and Resource Set B define the basic input-output structure of AI-based Beam Management. Set B represents what the UE actually measures from the radio environment, and Set A represents what the UE tries to predict with AI. In high-level terms, this allows the UE to observe a limited number of real reference signals and then use those observations to estimate the quality of a larger or different set of candidate beams. This structure is important because the UE cannot always measure every possible beam directly in real time. Instead, it measures a practical observation set and uses AI to infer the expected quality or best beam choice for the prediction set.

  • maxNumberOfResourceSetB-r19 indicates the number of measured resource sets that can be used as AI model input.
  • maxNumberOfResourceSetA-r19 indicates the number of target resource sets whose beam quality the AI model can predict.
  • Resource Set B is the observation set. These are the SSB or CSI-RS resources that the UE actually measures.
  • Resource Set A is the prediction set. These are the beams or resources whose future quality the UE tries to predict.
  • subset means the measured beams in Set B are a smaller group within the prediction beams in Set A.
  • diffSet means the measured beams are different from the predicted beams, such as using measurements on one set of resources to predict beams on another.
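
The subset versus diffSet distinction is just a set relationship between the measured resources and the prediction targets, which a few lines of Python make concrete. The classifier and beam labels are illustrative; they are not spec-defined structures.

```python
def classify_relation(set_b: set, set_a: set) -> str:
    """Classify how the measured set (Set B) relates to the prediction set (Set A)."""
    if set_b <= set_a:
        return "subset"      # measured beams are drawn from within the prediction set
    if set_b.isdisjoint(set_a):
        return "diffSet"     # measured and predicted beams are different resources
    return "partial-overlap"

set_a = {f"beam{i}" for i in range(64)}   # prediction set (e.g., 64 narrow beams)
set_b_sub = {"beam0", "beam8", "beam16"}  # sparse measurements taken within Set A
set_b_diff = {"wide0", "wide1"}           # e.g., wide beams measured instead
print(classify_relation(set_b_sub, set_a))   # subset
print(classify_relation(set_b_diff, set_a))  # diffSet
```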

In summary:

| Feature  | Set B (Input)                                               | Set A (Output)                                                       |
|----------|-------------------------------------------------------------|----------------------------------------------------------------------|
| Role     | Observation set: what the UE actually measures in the physical world. | Prediction set: the full list of potential beams the network wants to evaluate. |
| Quantity | Small (e.g., 8 beams)                                       | Large (e.g., 64 beams)                                               |
| UE Task  | Performs L1-RSRP measurements on these resources.           | Runs AI inference to predict the RSRP or "Best Beam ID" for this set. |
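
The input-output flow in the table can be sketched end to end. The real UE-side model is a trained neural network whose internals 3GPP deliberately does not standardize; the nearest-measured-beam heuristic below is only a toy stand-in to show the shape of the mapping from a small measured Set B to RSRP estimates over a larger Set A.

```python
def predict_set_a_rsrp(measured_b: dict, set_a_ids: list) -> dict:
    """Toy stand-in for the UE-side model: estimate each Set A beam's RSRP
    from the nearest measured Set B beam (a real UE runs a trained model)."""
    measured_ids = sorted(measured_b)
    estimates = {}
    for beam in set_a_ids:
        nearest = min(measured_ids, key=lambda m: abs(m - beam))
        # crude spatial falloff: 1 dB penalty per index of angular distance
        estimates[beam] = measured_b[nearest] - abs(nearest - beam)
    return estimates

measured = {0: -80.0, 8: -75.0, 16: -90.0}          # L1-RSRP (dBm) on Set B
est = predict_set_a_rsrp(measured, list(range(24)))  # Set A: 24 candidate beams
best = max(est, key=est.get)
print(best, est[best])  # the strongest estimate stays at measured beam 8
```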

Predicted RSRP

Predicted RSRP is one of the key ideas that makes AI-based Beam Management different from legacy beam reporting. In the legacy approach, the UE can only report the signal power that it has already measured. In the AI-based approach, the UE can go one step further and report the signal power it expects to see in the near future for a candidate beam. Through aiml-BM-Case1-PredictedRSRP-r19, the UE provides the gNB with a forward-looking estimate rather than only a backward-looking measurement. This allows the network to make proactive beam decisions before signal quality actually degrades, which can improve beam stability and reduce the risk of sudden beam failure.

  • In legacy Beam Management, the UE reports the RSRP that it has already measured.
  • In AI-based Beam Management, the UE can report Predicted RSRP through aiml-BM-Case1-PredictedRSRP-r19.
  • This means the UE reports the expected beam power before actually reaching that beam condition.
  • This allows the gNB to make proactive beam decisions instead of waiting for signal quality to drop.
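
A simple decision rule shows why predicted RSRP enables proactive behavior: the gNB can switch to a candidate beam whose *predicted* power beats the serving beam before any measured degradation occurs. The hysteresis-based rule below is an illustrative network-side policy, not a standardized procedure.

```python
def proactive_beam_decision(serving_pred_dbm, candidate_pred_dbm, hysteresis_db=3.0):
    """Switch ahead of time if a candidate is predicted to beat the serving
    beam by more than a hysteresis margin (illustrative decision rule)."""
    best_id = max(candidate_pred_dbm, key=candidate_pred_dbm.get)
    if candidate_pred_dbm[best_id] > serving_pred_dbm + hysteresis_db:
        return best_id
    return None  # stay on the serving beam

print(proactive_beam_decision(-85.0, {3: -78.0, 7: -84.0}))  # 3 (switch early)
print(proactive_beam_decision(-80.0, {3: -78.0, 7: -84.0}))  # None (margin not met)
```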

Relaxation Timeline D and D1

Relaxation Timeline D and D1 define the extra processing time that the UE may need when performing AI-based Beam Management. Unlike conventional beam reporting, AI-based prediction requires additional computation for inference, report generation, and sometimes simultaneous data collection or more complex multi-beam processing. relaxationTimelineD represents the extra time needed for the basic inference and reporting procedure, while relaxationTimelineD1 represents additional time that may be required for more advanced or parallel operations. At a high level, these parameters allow the UE to inform the network that AI processing cannot always fit into the same timing budget as legacy beam management, especially when NPU or GPU based computation is involved.

  • relaxationTimelineD indicates the extra time the UE needs to run AI inference and prepare the Beam Management report.
  • relaxationTimelineD1 indicates additional time that may be needed for more complex operations, such as concurrent data collection or multi-beam processing.
  • These parameters are important because AI-based Beam Management requires more processing time than conventional beam reporting.
  • They allow the UE to signal that extra time is needed for NPU or GPU based processing.
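
The effect of D and D1 on report timing can be modeled as additive slack on top of the legacy processing budget. The arithmetic below is an illustrative model of that idea (the actual timeline definitions live in the RAN1 specifications), with all values chosen for demonstration.

```python
def report_ready_time(trigger_ms, baseline_ms, d_ms, d1_ms=0.0, concurrent_ops=False):
    """Earliest report time = trigger + legacy baseline + D, plus D1 when
    concurrent data collection or multi-beam processing applies (illustrative)."""
    extra = d_ms + (d1_ms if concurrent_ops else 0.0)
    return trigger_ms + baseline_ms + extra

print(report_ready_time(0.0, 4.0, d_ms=2.0))                                   # 6.0
print(report_ready_time(0.0, 4.0, d_ms=2.0, d1_ms=3.0, concurrent_ops=True))   # 9.0
```

The network must schedule the reporting occasion no earlier than this relaxed deadline, otherwise the UE's NPU/GPU inference may not have completed.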

 

Reference