GutoL/SFC_RL
Reinforcement Learning for Service Function Chain (SFC) Placement

This repository hosts a reinforcement learning (RL) solution for optimizing Service Function Chain (SFC) placement in dynamic network environments.

Features:

  • Reinforcement Learning Solution: Jupyter notebooks present an RL solution for SFC placement optimization, in which RL algorithms dynamically adapt SFC configurations to changing network conditions and demands.

  • Python Simulator: A Python-based simulator emulates the arrival of SFC requests and the failure and repair events of the underlying infrastructure, providing a testbed for validating the RL-based placement strategies.

  • OpenAI Gym Environment: An environment built on the OpenAI Gym framework exposes the simulator through the standard RL interface, enabling experimentation and training for SFC placement scenarios.

  • Stable Baselines Integration: The solution uses the Stable Baselines library to build, train, and tune RL agents for SFC placement tasks.

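The simulator bullet above can be sketched as a small discrete-event loop. The rates, event names, and structure below are illustrative assumptions for exposition, not the repository's actual code: SFC request arrivals and node failures follow exponential inter-event times, and each failure schedules a repair.

```python
import heapq
import random

def simulate(arrival_rate=1.0, failure_rate=0.05, repair_rate=0.5,
             horizon=100.0, seed=0):
    """Discrete-event sketch of SFC request arrivals plus node
    failure/repair events (all rates are illustrative)."""
    rng = random.Random(seed)
    events = []  # min-heap of (time, kind), popped in time order
    heapq.heappush(events, (rng.expovariate(arrival_rate), "sfc_request"))
    heapq.heappush(events, (rng.expovariate(failure_rate), "node_failure"))
    log = []
    while events:
        time, kind = heapq.heappop(events)
        if time > horizon:
            break
        log.append((time, kind))
        if kind == "sfc_request":
            # schedule the next request arrival
            heapq.heappush(events,
                           (time + rng.expovariate(arrival_rate), "sfc_request"))
        elif kind == "node_failure":
            # schedule this node's repair and the next failure
            heapq.heappush(events,
                           (time + rng.expovariate(repair_rate), "node_repair"))
            heapq.heappush(events,
                           (time + rng.expovariate(failure_rate), "node_failure"))
    return log
```

An RL agent would consume such a log step by step, re-placing affected VNFs after each failure event.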
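The Gym environment bullet can likewise be illustrated with a minimal placement environment following the classic Gym reset/step interface. This is a dependency-free sketch, not the repository's environment: the state (node capacities plus the index of the next VNF), the action (choice of hosting node), and the rewards are all hypothetical.

```python
import random

class SFCPlacementEnv:
    """Gym-style sketch: place each VNF of an SFC on a node.

    Follows the reset()/step() convention of classic OpenAI Gym
    without importing gym; all numbers here are illustrative.
    """

    def __init__(self, num_nodes=4, node_capacity=10, chain_length=3):
        self.num_nodes = num_nodes
        self.node_capacity = node_capacity
        self.chain_length = chain_length  # number of VNFs per SFC request

    def reset(self):
        # Remaining CPU capacity per node, plus random per-VNF demands.
        self.capacity = [self.node_capacity] * self.num_nodes
        self.vnf_index = 0
        self.vnf_demands = [random.randint(1, 4)
                            for _ in range((self.chain_length))]
        return self._obs()

    def _obs(self):
        # Observation: capacities followed by the index of the next VNF.
        return self.capacity + [self.vnf_index]

    def step(self, action):
        # action = index of the node chosen to host the current VNF
        demand = self.vnf_demands[self.vnf_index]
        if self.capacity[action] >= demand:
            self.capacity[action] -= demand
            reward = 1.0          # placement accepted
            self.vnf_index += 1
        else:
            reward = -1.0         # rejected: not enough capacity
        done = self.vnf_index == self.chain_length
        return self._obs(), reward, done, {}
```

A real Gym subclass would additionally declare `observation_space` and `action_space` so that library agents can be attached directly.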
Key Components:

  • Notebook: The notebook details the implementation and evaluation of the RL-based SFC placement solution, covering the underlying algorithms, the simulation methodology, and the experimental results.

  • Simulator Codebase: The Python code for SFC request arrival simulation and infrastructure event modeling is included, providing a customizable framework for scenario-based testing and analysis.

  • RL Agent Implementations: RL agents for SFC placement optimization are built with Stable Baselines and trained and evaluated in the provided environment to assess their effectiveness.

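To show the training loop that a library like Stable Baselines automates behind `model.learn(...)`, here is a dependency-free tabular Q-learning sketch on a toy placement problem. The state and reward model are illustrative assumptions, not the repository's agents:

```python
import random

def train_q_learning(num_nodes=3, capacity=5, episodes=200,
                     alpha=0.1, gamma=0.9, epsilon=0.2, seed=0):
    """Tabular Q-learning on a toy unit-demand placement task
    (a hand-rolled stand-in for a Stable Baselines agent)."""
    rng = random.Random(seed)
    q = {}  # Q-table: (state, action) -> estimated value

    def greedy(state):
        return max(range(num_nodes),
                   key=lambda a: q.get((state, a), 0.0))

    for _ in range(episodes):
        caps = [capacity] * num_nodes
        for _ in range(10):  # try to place up to 10 unit-demand VNFs
            state = tuple(caps)
            # epsilon-greedy exploration
            if rng.random() < epsilon:
                action = rng.randrange(num_nodes)
            else:
                action = greedy(state)
            if caps[action] > 0:
                caps[action] -= 1
                reward = 1.0   # accepted placement
            else:
                reward = -1.0  # rejected: node full
            next_state = tuple(caps)
            next_best = max(q.get((next_state, a), 0.0)
                            for a in range(num_nodes))
            old = q.get((state, action), 0.0)
            # standard Q-learning temporal-difference update
            q[(state, action)] = old + alpha * (reward + gamma * next_best - old)
    return q
```

With Stable Baselines itself, this whole loop collapses to constructing a model over a Gym environment and calling its training method; the sketch only makes the update rule explicit.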
Cite our work

If you use this project in your work, please consider citing our article:

@article{santos2021availability,
  title={Availability-aware and energy-aware dynamic SFC placement using reinforcement learning},
  author={Santos, Guto Leoni and Lynn, Theo and Kelner, Judith and Endo, Patricia Takako},
  journal={The Journal of Supercomputing},
  pages={1--30},
  year={2021},
  publisher={Springer}
}

Santos, G. L., Lynn, T., Kelner, J., & Endo, P. T. (2021). Availability-aware and energy-aware dynamic SFC placement using reinforcement learning. The Journal of Supercomputing, 1-30.
