It is well known that AI/ML will be one of the main pillars (enablers) of 6G technology, but exactly which applications and use cases it will serve is not clearly defined yet. So until the details are settled in 3GPP, I will try to consolidate as many of the possible use cases and models as I can.

Categories and Use Cases

Different models are picked because they are very good at certain jobs but not necessarily at others. The table you see next breaks it all down. I have grouped models based on what they mainly do, like sorting things (Classification), guessing future data (Prediction), or taking actions with decision making (Action/Control). For every group, the table lists the ML models and gives an example of how they are used in today's communication tech. In short, this table shows just how big a role ML plays in making our gadgets work better and smarter.


Category

ML Model

Use Case

Classification LSTM LiDAR Aided Human Blockage Prediction
Classification GNN Access point selection in cell-free massive MIMO systems
Prediction LASSO AI based compressed sensing for beam selection in D-MIMO
Prediction RNN, LSTM AI/ML-based predictive orchestration
Action/Control CNN, RL Constellation shaping
Action/Control FL Distributed AI for automated UPF scaling in low-latency network slices


What specific applications will AI/ML be used for? In other words, what will the use cases for AI/ML in 6G be?

Here's a simple table to break it down for you. It lists specific situations or "Use Cases" where ML is applied. Next to each situation there is a brief "Description" of what is going on, and "ML Model" indicates the typical ML model employed for the use case.

This table is a summary of use cases based on Hexa-X Deliverable D4.2.

Use Case

Description

ML Model


LiDAR Aided Human Blockage Prediction

Monitor indoor activity and predict dynamic human blockages, aiming to improve link reliability and reduce link failures while supporting mobility and adapting to the dynamics of the environment.

LSTM
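To make the entry above more tangible, here is a minimal structural sketch of an LSTM-based blockage predictor: per-frame LiDAR features feed an LSTM cell, and the final hidden state is mapped to a blockage probability. Everything here (the 6-dimensional feature vector, the hidden size, the random untrained weights) is an illustrative assumption of mine, not a description of the actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

D, H = 6, 8                                  # LiDAR feature size, hidden size
W = rng.standard_normal((4 * H, D)) * 0.1    # input weights, gates stacked
U = rng.standard_normal((4 * H, H)) * 0.1    # recurrent weights
b = np.zeros(4 * H)
w_out = rng.standard_normal(H) * 0.1         # hidden state -> blockage logit

def lstm_step(x, h, c):
    """One LSTM step; gates stacked as [input, forget, cell, output]."""
    z = W @ x + U @ h + b
    i, f = sigmoid(z[:H]), sigmoid(z[H:2*H])
    g, o = np.tanh(z[2*H:3*H]), sigmoid(z[3*H:])
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def blockage_probability(frames):
    """frames: (T, D) sequence of per-frame LiDAR-derived features."""
    h, c = np.zeros(H), np.zeros(H)
    for x in frames:
        h, c = lstm_step(x, h, c)
    return sigmoid(w_out @ h)                # P(blockage in the near future)

p_block = blockage_probability(rng.standard_normal((20, D)))
```

A deployed version would train W, U, b, and w_out on labeled LiDAR sequences so that p_block rises ahead of an actual blockage event.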

Access point selection in cell-free massive MIMO systems

Predict candidate Access Points (APs) based on limited measurements, improving latency and achieving mobility support targets.


AI based compressed sensing for beam selection in D-MIMO

The technique leverages the sparsity of radio propagation in multipath channels, enabling beam selection from fewer measurements. The Access Points (APs) transmit the same reference signal in all directions, and the best beam direction and corresponding channel gains are determined via a compressed sensing computation.
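To make the sparsity idea concrete, here is a minimal sketch using Orthogonal Matching Pursuit as a stand-in for the compressed-sensing computation (the matrix sizes, the Gaussian sensing matrix, and the single-dominant-path assumption are all illustrative choices of mine, not taken from the deliverable):

```python
import numpy as np

rng = np.random.default_rng(1)

M, N = 16, 64                       # M compressed measurements << N directions
A = rng.standard_normal((M, N))
A /= np.linalg.norm(A, axis=0)      # sensing matrix with unit-norm columns

true_beam = 37                      # index of the (single) dominant path
x = np.zeros(N)
x[true_beam] = 1.0                  # 1-sparse beam-gain vector
y = A @ x                           # the only M observations we get

def omp(A, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
        support.append(j)
        gains, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ gains
    return support, gains

support, gains = omp(A, y, k=1)
best_beam = support[0]              # recovered beam index (here: 37)
```

With k > 1 the same loop recovers multiple paths, at the cost of weaker recovery guarantees; LASSO, the model listed for this use case in the first table, solves the same sparse-recovery problem via L1 regularization instead of a greedy search.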


Constellation shaping

Removes the need for transmitting pilot symbols within the waveform, as the geometric shape of the complex-valued constellation is learned.


ML aided channel (de)coding for constrained devices

Enhance channel coding, particularly in the short block length regime.


DL for location-based beamforming

Learn the location/precoder mapping, overcoming limitations of existing LBB methods that assume the existence of a Line of Sight (LoS) path.
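As a structural sketch of the location/precoder mapping (the network shape and the random, untrained weights are my own illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

N_ANT = 32                                        # antennas at the access point

# Illustrative 2-layer MLP mapping a UE position (x, y) to a complex
# precoder. Weights are random here; in the use case they would be
# trained on position/channel pairs, which is what removes the LoS
# assumption of classical location-based beamforming.
W1 = rng.standard_normal((64, 2)) * 0.1
W2 = rng.standard_normal((2 * N_ANT, 64)) * 0.1   # real and imag parts stacked

def location_to_precoder(pos):
    h = np.maximum(W1 @ pos, 0.0)                 # ReLU hidden layer
    out = W2 @ h
    w = out[:N_ANT] + 1j * out[N_ANT:]            # reassemble complex weights
    return w / np.linalg.norm(w)                  # unit-power precoder

w = location_to_precoder(np.array([12.5, -3.0]))
```

The unit-norm output is a deliberate design choice: the network learns only the beam direction, while transmit power is handled separately.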


ML aided beam management

Allows a newly activated AP to start serving users immediately, using information from neighbouring APs to select the appropriate beam.


TX-side CNN for reducing PA-induced out-of-band emissions

Reduce out-of-band emissions caused by a nonlinear power amplifier (PA).


ML/AI empowered receiver for PA non-linearity compensation

Compensate for the non-linearities of Power Amplifiers (PAs) at the receiver. This approach allows higher distortion to be tolerated, reducing the PA output backoff and leading to higher output power and more energy-efficient operation of the transmitters.
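The core idea can be sketched in a few lines: distort a signal with a toy PA model, then fit a receiver-side inverse from known training samples. The cubic PA model and the polynomial compensator are deliberately tiny stand-ins of mine for the ML/AI receiver described above:

```python
import numpy as np

rng = np.random.default_rng(3)

def pa(x):
    """Illustrative memoryless PA model: mild odd-order compression."""
    return x - 0.15 * x**3

x = rng.uniform(-1.0, 1.0, 2000)    # known training samples at the receiver
y = pa(x)                           # what actually arrives, PA-distorted

# Receiver-side compensation: fit z = a1*y + a3*y**3 to the known x by
# least squares, approximating the inverse of the PA characteristic.
B = np.column_stack([y, y**3])
(a1, a3), *_ = np.linalg.lstsq(B, x, rcond=None)
z = a1 * y + a3 * y**3              # compensated signal

mse_raw = float(np.mean((y - x) ** 2))
mse_comp = float(np.mean((z - x) ** 2))   # much smaller than mse_raw
```

A neural receiver generalizes this to memory effects and unknown distortion shapes, but the objective is the same: tolerate more PA distortion so the backoff can be reduced.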

AI/ML Demapper

AI/ML-based predictive orchestration

Predict future demands and arrange resource provisioning in advance. Reduce service instantiation time and provide resilience against potential hazards that could emerge from connectivity failures.


Distributed AI for automated UPF scaling in low-latency network slices

Trigger preemptive auto-scaling of local UPFs at the network edge. This is achieved by deploying a distributed Network Data Analytics Function (NWDAF) instance in a strategic location that can autonomously monitor the network in the target geographical area. The AI agent, deployed in distributed edge computing resources, processes data from the NWDAF, applies the AI algorithm and models, and controls the respective UPF scaling.
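The control loop above (NWDAF reports load, an agent forecasts and scales) can be sketched with a deliberately simple linear-trend forecaster; the load values, capacity, and headroom numbers are invented for illustration, and a real agent would use a learned model instead of a straight-line fit:

```python
import numpy as np

# Per-interval session counts for one edge site, as a stand-in for the
# load metrics a distributed NWDAF instance would report (values invented).
load = np.array([95.0, 104, 118, 126, 140, 152])

CAPACITY_PER_UPF = 100   # sessions one UPF instance can handle (assumed)
HEADROOM = 0.8           # scale out before instances hit full utilisation

def predict_next(history):
    """Linear-trend forecast of the next interval via a least-squares fit."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, 1)
    return slope * len(history) + intercept

def target_instances(history):
    """Preemptive scaling decision based on the forecast, not current load."""
    return int(np.ceil(predict_next(history) / (CAPACITY_PER_UPF * HEADROOM)))

n = target_instances(load)   # forecast ~163 sessions -> 3 instances,
                             # while current load alone would only need 2
```

The point of acting on the forecast rather than the current measurement is exactly the "preemptive" part: the extra UPF instance is up before the demand arrives, protecting the low-latency slice.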


The following acronyms are used in the table:

  •   CNN : Convolutional Neural Network
  •   RNN : Recurrent Neural Network
  •   GNN : Graph Neural Network
  •   LSTM : Long Short-Term Memory
  •   MLP : Multi Layer Perceptron
  •   RL : Reinforcement Learning
  •   FL : Federated Learning
  •   DCB : Deep Contextual Bandit
  •   ML : Machine Learning
  •   DL : Deep Learning

Questions to be answered

  •   At which stages along the overall communication path will AI/ML be applied? (e.g., UE vs Network, higher layers vs lower layers (PHY, MAC), etc.)
  •   Which parts of AI/ML will be specified in 3GPP, and which parts will be left up to the vendors?
  •   What kinds of DL/ML models will be used?
  •   Once we decide to apply AI/ML in a process, will we use it as a complete replacement for the conventional method, or as an optional alternative to the existing process? For example, if we decide to use AI/ML for channel estimation, will we use AI/ML-based channel estimation only and get rid of conventional methods like MMSE?

Learn from YouTube and AI

With the widespread adoption of AI since early 2023, I have been trying a somewhat new approach to study/learning that utilizes various AI solutions. In this section, I pick out some YouTube materials that look informative (at least to me). The contents shared in this section are created as follows.

  •   Watch the full contents of the YouTube material myself
    •   NOTE : This is essential since there is a lot of visual material that cannot be conveyed by the summary, as well as details the summary does not capture. If you skip this step, nothing would go through your brain... it would just go from YouTube directly through the AI. Then the AI would learn, but you would not :)
    •   NOTE : I usually pick materials with more specific technical information and use cases rather than high-level overviews.
  •   Get the transcript from YouTube (as of 2023, YouTube provides a built-in function to generate the transcript for a video)
  •   Copy the transcript, save it into a text file, paste the text into chatGPT (GPT 4), and request a summary (NOTE : If you do not subscribe to the chatGPT paid version, you may try it with Claude AI.)


The Road Towards an AI-Native Air Interface for 6G | AI/ML IN 5G CHALLENGE

The talk, delivered by Jakob Hoydis, head of a research department at Nokia Bell Labs in France, focused on exploring machine learning applications for the physical and medium access control layers in communication systems. He shared a vision for a 6G system where machine learning plays a pivotal role in designing the network, particularly the AI-native air interface. The discussion covered the potential of creating virtual digital twin worlds, augmenting intelligence, and controlling robots, all on a global scale through 6G. Key technologies necessary for 6G were highlighted, including AI optimization, spectrum sharing, and new security paradigms. Hoydis also touched on the shift from traditional design approaches to data-driven machine learning techniques for enhancing communication systems. He emphasized the importance of differentiable transceiver algorithms and the potential for learning custom waveforms and signaling schemes. The talk concluded with a Q&A session discussing the impact on standards, real-time processing, data generation for simulations, and the nature of training for these advanced systems.

  •   The telecommunications industry foresees challenges in finding enough spectrum to accommodate increasing data rates, necessitating new scheduling algorithms and multiplexing waveforms.
  •   The evolution of architecture is aimed at increased flexibility and specialization to cater to new verticals.
  •   Discussions around a new security and trust paradigm are emerging, especially for private networks where jamming and information sharing become concerns.
  •   The speaker focuses on the AI interface in this talk, detailing the role of machine learning in enhancing or replacing components in 5G.
  •   The potential for 6G includes:
    •   Learning bespoke waveforms and constellations to utilize spectrum more efficiently.
    •   Learning custom signaling and access schemes that can switch between different access methods as needed.
    •   Adapting to hardware non-linearities and optimizing parameters AI can manage better than humans.
  •   There is an emphasis on the need for hardware acceleration to support larger neural networks and a machine learning-first approach.
  •   The transition to AI-designed components requires new signaling and procedures to allow distributed end-to-end training.
  •   The speaker highlights the importance of differentiable transceiver algorithms that do not require block-wise ground truths for training, optimizing for end-to-end performance.
  •   The binary cross-entropy loss function used in machine learning is suitable for communication systems as it aligns with maximizing achievable rates with practical decoders.
  •   A case study is presented, ranging from neural receivers to pilotless transmissions, showing how machine learning can significantly improve physical layer communication.
  •   The talk delves into the idea of an AI-native air interface, which can lead to substantial throughput improvements and eliminate the management of pilot placements.
  •   Future research topics include:
    •   Extending end-to-end learning for custom waveforms and joint communication-sensing purposes.
    •   Application-specific end-to-end learning and semantic communications.
    •   Challenges of decentralized and federated learning.
  •   The speaker suggests the next frontier is extending end-to-end learning to higher layers, potentially leading to machine-learned protocols.
  •   Standardization efforts are discussed, questioning whether learned configurations or the procedures to achieve them should be standardized.
  •   Real data from commercial products is not widely shared with the academic community, but it is recognized as necessary for advancing research.
  •   Training with machine learning is generally offline, utilizing simulated data, and the speaker is skeptical of the practicality of real-time online training for physical layer tasks.
  •   The importance of efficient hardware accelerators tailored for communication-specific tasks is emphasized to support physical layer workloads.
  •   Details on Q&A (NOTE : In this presentation, I found the Q&A session very informative, so I asked chatGPT to give me a more detailed summary of it)
    • Transition to Data-Driven Methods: The future is expected to be data-driven, with expert knowledge integrated to improve training efficiency and reduce complexity. There's a potential shift from model-based approaches as data-driven methods can leverage data more efficiently, especially if hardware acceleration reduces the cost and energy consumption.
    • Impact on Standards: Currently, the research is between stage one and two of machine learning integration into physical layer design. Commercial products lack hardware accelerators for layer one machine learning tasks, which limits the replacement of physical layer components with neural networks. The impact on standards is minimal for now, as stages one and two do not require new signaling; the changes are transparent to existing standards. However, more freedom in designing physical layer components would require new forms of signaling for training and metadata exchange.
    • Complexity from Mobility: The approach can handle different channel models and Doppler spreads, showing potential resilience to conditions caused by mobility. Training over various conditions means a single optimized system can be used across different scenarios without constant retraining.
    • Real-Time Processing: The processing required for these neural network-based systems is extremely fast, measured in nano to microseconds. Current hardware accelerators are not targeted towards layer one loads, and there's a need for communication-specific accelerators that can support the necessary speeds for real-time processing.
    • Simulation Data for Training: Most research is based on simulations using 3GPP models. Access to real data is limited and not widely shared with the academic community. There's a need for large datasets of physical layer data from commercial products to advance the research, which is a challenge due to proprietary concerns and the practical difficulties of collecting such data.
    • Offline vs. Real-Time Data: Offline training is considered sufficient and more pragmatic for these applications. Real-time online adaptation and training are not viewed as practical for deployment within the distributed units of a 5G system.
    • Modulation Scheme and Loss Function: The loss function is adaptable to the number of bits transmitted and can adjust to different modulation schemes. Training can be specific to modulation size or general, with the ability to adapt to various sizes. It's possible to train a single neural receiver for different modulation schemes by providing additional inputs or masking unnecessary outputs.
    • Overhead vs. Gains: The value of training overhead depends on the lifetime of a trained model. If constant retraining is required due to never converging to a stable model, the overhead is too high. Optimization might be more effective for specific scenarios, like high-speed vs. walking-speed users. The approach could be particularly beneficial for quasi-static networks like wireless backhaul links.
    • Simulations and Types of Data: Training is mainly done with simulated data, but real data from commercial products is also used, albeit less frequently due to proprietary restrictions. The type of data used for training is essential, and real measured data could reveal further performance gains.
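The remark above that the binary cross-entropy (BCE) loss aligns with achievable rates can be made concrete: with bit-metric decoding, an achievable rate per symbol is approximately the number of bit levels m minus the total BCE (measured in bits) across those levels, so minimizing BCE during training directly maximizes this rate estimate. A small numeric sketch, where the modulation order and LLR magnitudes are arbitrary choices of mine:

```python
import numpy as np

def bce_bits(bits, llrs):
    """Binary cross-entropy in bits. LLR convention: L = log P(b=0)/P(b=1)."""
    p_true = 1.0 / (1.0 + np.exp(-(1 - 2 * bits) * llrs))  # prob. of true bit
    return float(-np.mean(np.log2(p_true)))

m = 4                                   # bits per symbol, e.g. 16-QAM
rng = np.random.default_rng(4)
bits = rng.integers(0, 2, size=(1000, m))

# A confident, always-correct demapper: large LLRs with the right sign.
llrs = (1 - 2 * bits) * 8.0

# Bit-metric-decoding rate estimate: m minus total BCE across bit levels.
rate = m - m * bce_bits(bits, llrs)     # close to 4 bits/symbol here
```

A noisy or miscalibrated demapper would produce a larger BCE and hence a lower rate estimate, which is why training a neural receiver against BCE optimizes end-to-end throughput directly rather than an intermediate block-wise metric.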


AI for 5G Advanced toward 6G

The transcript discusses various aspects of AI in 5G evolution towards 6G, focusing on AI's role in enhancing network performance and addressing new use cases in mobile communication. It covers the integration of AI in network design, the importance of data-driven architecture, and the collaboration between academia, industry, and standardization bodies for a globally aligned approach towards 6G. The presentation emphasizes the potential of AI in improving network management, user experience, and system performance in telecommunication networks. Key points include the exploration of AI-enabled radio access networks, the study of functional frameworks for AI in telecommunication, and the challenges and opportunities in integrating AI into existing mobile network architectures.

  •   The potential for AI to optimize network management, enhance user experience, and address complex design issues in advanced 5G features.
  •   The significance of AI in future non-terrestrial networks was acknowledged, with interest from both academia and industry.
  •   Challenges in interoperability and the implementation of AI algorithms in multi-vendor environments were highlighted.
  •   The importance of maintaining user privacy and proprietary information when transferring AI models between network entities was discussed.
  •   There was emphasis on the need for a standardized approach to AI integration for global scalability and innovation.
  •   AI's potential to optimize network management and user experience in 5G and 6G.
  •   The role of 3GPP in standardizing AI applications within mobile networks.
  •   Challenges in implementing AI across multi-vendor environments and ensuring interoperability.
  •   The significance of AI in enhancing network energy saving, load balancing, and mobility optimization.
  •   The importance of user data privacy and anonymization in AI/ML operations.
  •   Discussion on AI-enabled air interface for 5G evolution and the potential benefits of augmenting this interface.
  •   The need for a common AI framework in 6G and the importance of collaboration in the journey towards 6G.
  •   Challenges and methodologies in verifying AI algorithm performance, particularly in varied scenarios.
  •   The role of AI in non-terrestrial networks and its potential in enhancing network performance and capabilities.
  •   NVIDIA related solutions
    • Baseband Functionality in 5G:
      • The baseband functionality in 5G is purely software-defined and implemented using a GPU combined with CUDA.
      • This approach is part of enterprise solutions, highlighting the shift towards software-driven implementations in 5G networks.
    • Hardware and Software Framework:
      • The hardware basis for these solutions is the NVIDIA Converged Accelerator.
      • This hardware combines the computational power of the GPU with network acceleration and the security benefits of a CPU in a single high-performance package.
      • A unified compute framework is present at the top layer, integrating various application frameworks with communication connectivity services.
    • Signal Processing on the GPU:
      • Signal processing tasks, specifically for the Physical Downlink Shared Channel (PDSCH) and the Physical Uplink Control Channel (PUCCH), are performed on the GPU using NVIDIA's cuPHY library (rendered as "cool file" in the transcript).
      • This means that all of the air interface signal processing, including channels like PDSCH and PUCCH, is done using GPU-based implementations.


Dec 2023 Webinar - AI-native Networks (Telco Assessment)

NOTE : For the downloadable presentation material, check LinkedIn

This webinar focuses on the telecommunications sector, on the transformation of service providers into digital entities, and on adopting AI as a core component of their services. The talk highlights how AI is dispersed across various domains like networking, IT, OSS, BSS, billing, sales, and marketing. It also addresses the challenge of scattered AI initiatives lacking a cohesive architecture. The speaker emphasizes the importance of AI in competing with digital service providers like Google and Amazon, and discusses the need for a solid reference architecture for AI in telecommunications.

Key points from the transcript include:

  •   Digital Transformation in Telecom: Telecommunication companies are transitioning into digital service providers, competing with companies like Google and Amazon.
  •   Role of AI and ML: AI and ML are key components in this transformation, yet they are often implemented in scattered and uncoordinated ways across various domains.
    • Scattered Implementation: AI and ML are often implemented in a dispersed manner across various domains without a cohesive strategy.
    • Lack of Unified Architecture: There is a noticeable absence of a unified reference architecture for AI in the telecommunications industry.
    • Impact on Different Domains: AI and ML technologies impact a range of areas within telecom companies, including network operations, customer service, and marketing strategies.
    • Need for Strategic Alignment: Emphasizing the necessity for these technologies to align with the overall business strategy of telecom companies.
    • Potential for Enhanced Services: AI and ML offer potential to significantly enhance various telecom services and operations.
  •   Challenges in AI Integration: The lack of a unified reference architecture for AI and scattered initiatives across different domains pose challenges.
    • Fragmented AI Initiatives: Difficulty arises from AI and ML initiatives being implemented in a fragmented manner across different business units.
    • Lack of Standardization: There's a challenge due to the lack of standard approaches and methodologies in AI and ML implementation.
    • Integration with Existing Systems: Integrating AI and ML with existing legacy systems and processes in telecom companies presents significant challenges.
    • Scalability Issues: Problems related to scaling AI solutions effectively across the entire organization.
    • Talent and Skill Gaps: The telecom industry faces challenges in acquiring and nurturing the right talent and skills needed for effective AI integration.
  •   AI Classifications: Discussion on various types of AI, including narrow AI, general AI, and super AI, with a focus on their applications in telecom.
    • Narrow AI: This type focuses on specific tasks and is the most commonly used AI in telecom for applications like customer service chatbots and network optimization.
    • General AI: General AI, capable of performing any intellectual task that a human can do, is still in the theoretical stage and not yet applied in the telecom industry.
    • Super AI: Super AI, which surpasses human intelligence, is also theoretical at this point and not yet a reality in the telecom context.
  •   AI Adoption in Telecom Domains: Examining how AI and ML are being used in different areas like network, IT, OSS, BSS, billing, sales, marketing, and customer service.
    • Network Operations: AI is used for optimizing network performance and predictive maintenance.
    • Customer Service: Chatbots and virtual assistants powered by AI enhance customer interaction and support.
    • Sales and Marketing: AI-driven analytics are employed for personalized marketing and sales strategies.
    • Billing and OSS/BSS: AI aids in automating and improving billing processes and operations support systems/business support systems.
    • IT Infrastructure: Utilization of AI in managing and optimizing IT infrastructure.
  •   Convergence with Other Technologies: Exploration of how AI in telecom converges with other technologies, particularly cloud-native technologies.
    • Cloud-Native Technologies: AI in telecom is increasingly integrating with cloud-native technologies for enhanced scalability and flexibility.
    • Internet of Things (IoT): AI is being used alongside IoT to manage and analyze the vast data generated by connected devices.
    • 5G and Beyond: The evolution of 5G networks is closely tied to AI advancements, enabling smarter and more efficient network management.
    • Edge Computing: AI's role in edge computing for faster processing and reduced latency in telecommunications networks.
  •   Future Expectations: Anticipation of future trends, including the evolution towards AI-native networks and the role of 6G in advancing AI integration.
    • AI-Native Networks: Expectation of a shift towards networks that are inherently designed around AI capabilities.
    • Role of 6G: Anticipation of 6G technology playing a significant role in further advancing AI integration in telecom.
    • Advanced Data Analytics: Continued growth in the use of AI for more sophisticated data analytics.
    • Enhanced Customer Experience: AI is expected to significantly improve the overall customer experience in telecommunications.
  •   MLOps and AI Implementation: Discussion on Machine Learning Operations (MLOps) and best practices for implementing AI and ML models efficiently in telecom organizations.
    • Best Practices for Implementation: Emphasizes the importance of adopting best practices in the deployment and management of AI and ML models.
    • MLOps Frameworks: The significance of Machine Learning Operations (MLOps) frameworks in ensuring efficient and effective implementation.
    • Continuous Learning and Adaptation: Highlighting the need for continuous learning and adaptation in AI models to remain effective.
    • Integration Challenges: Discusses the challenges in integrating AI and ML models with existing telecom infrastructure and systems.
  •   Strategic Considerations: Emphasis on the need for telecom operators to include AI as part of a broader transformation strategy and not just as a standalone technology.
    • AI as Part of Broader Strategy: Stresses that AI should be a part of the overall business transformation strategy, not just a standalone technology.
    • Long-term Vision: Emphasizes the importance of having a long-term vision for AI integration within the company.
    • Stakeholder Collaboration: Highlights the need for collaboration among various stakeholders for successful AI implementation.
    • Regulatory and Ethical Considerations: Addresses the need to consider regulatory and ethical implications in AI strategies.
  •   Challenges and Recommendations: Addressing challenges like technology fragmentation, the need for standard architectures, successful pilot programs, and collaboration between various stakeholders in the telecom industry.
    • Technology Fragmentation: Addressing the issue of fragmented technology landscapes within telecom companies.
    • Standard Architectures Need: Recommends the development of standard architectures for AI integration.
    • Pilot Program Successes: The importance of creating successful pilot programs as a step towards larger implementation.
    • Industry Collaboration: Emphasizes the need for increased collaboration within the industry to tackle AI integration challenges.