Hi, this is CodeFellas 👋

👨🏻‍💻  Problem Statement

A washing machine manufacturing company builds a large number of washing machines every year, across different models and types. For each type, the technology and logistics differ, and the parts are manufactured in different regions of the world. All of this is coordinated through a supply chain management process, with targeted delivery dates planned for the next 5 years.

Because the process is complex, each department produces a lot of data covering logistics, supply chain, planning, execution, and order forecasting. In practice this means that not only the department that owns a certain type of data produces it; the departments that are direct or indirect consumers of that data also produce their own forecast versions of it, so they can keep up with planning and day-to-day work instead of waiting for the data to finally reach them. At the same time, the data-owning departments keep updating the data as things change, which creates overhead for the consuming teams, who have already consumed an earlier version and now need to recalibrate their own data.

All of this data is only consolidated once the official manufacturing process actually reaches the corresponding milestones (real-time data). The intermediate process therefore generates a lot of data in each department, which is then consumed by other departments or sub-manufacturing units to plan their logistics and supply chain. This intermediate data, produced as milestones are achieved in the production process, is mostly redundant and has no authoritative status. It sits in the system consuming storage and memory and, in the long term, creates sustainability issues.

The dataset provided covers 3 stages of manufacturing: fabrication, sub-assembly, and assembly. (The dataset provided is just an example; use it to extend the dataset.) Provide a possible solution and approach to reduce this underlying intermediate data in the system and make the way departments use the data more sustainable in the long term.
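
To make the data flow above concrete, here is a minimal sketch that generates a few synthetic milestone records for the three stages (fabrication, sub-assembly, assembly), with a flag separating departmental forecasts from the real-time record that eventually supersedes them. All column names and values here are illustrative assumptions, not the schema of the provided dataset.

```python
# Minimal sketch: synthetic rows for the three manufacturing stages.
# Column names and values are illustrative assumptions, not the schema
# of the dataset provided with the problem statement.
import random
from datetime import date, timedelta

STAGES = ["fabrication", "sub-assembly", "assembly"]

def make_rows(n_orders=5, seed=42):
    random.seed(seed)
    rows = []
    for order_id in range(1, n_orders + 1):
        for stage in STAGES:
            planned = date(2024, 1, 1) + timedelta(days=random.randint(0, 365))
            # Each consuming department may publish its own forecast for the milestone...
            rows.append({
                "order_id": order_id,
                "stage": stage,
                "record_type": "forecast",
                "department": random.choice(["logistics", "planning", "execution"]),
                "milestone_date": planned,
            })
            # ...and eventually one real-time record supersedes those forecasts.
            rows.append({
                "order_id": order_id,
                "stage": stage,
                "record_type": "real_time",
                "department": "manufacturing",
                "milestone_date": planned + timedelta(days=random.randint(-5, 5)),
            })
    return rows

if __name__ == "__main__":
    for row in make_rows(n_orders=2):
        print(row)
```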

👨🏻‍💻  Requirements

● Create a data lake with a normalized DB to reduce redundancy.
● Identify the currently redundant data among the forecasted data (a sketch of one possible check follows this list).
● Create an automation process for data stamping (approval) of the real-time data.
● Create a dashboard for the users in each domain to access the data required for their domain and to create forecast and real-time data.
● Create a dashboard for the data officer to monitor the data stamping process.
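
As a rough illustration of the redundancy-identification and data-stamping requirements, the sketch below assumes a single hypothetical records table keyed by order and stage; the table and column names are inventions for this example, not the actual data lake schema.

```python
# Minimal sketch for redundancy identification and data stamping,
# assuming a hypothetical table layout; not the actual data lake schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE records (
    record_id      INTEGER PRIMARY KEY,
    order_id       INTEGER NOT NULL,
    stage          TEXT    NOT NULL,       -- fabrication / sub-assembly / assembly
    record_type    TEXT    NOT NULL,       -- 'forecast' or 'real_time'
    department     TEXT    NOT NULL,
    milestone_date TEXT    NOT NULL,
    stamped        INTEGER NOT NULL DEFAULT 0  -- 1 once the data officer approves it
);
""")

def stamp_real_time(conn, order_id, stage):
    """Automation step: approve ('stamp') the real-time record for a milestone."""
    conn.execute(
        "UPDATE records SET stamped = 1 "
        "WHERE order_id = ? AND stage = ? AND record_type = 'real_time'",
        (order_id, stage),
    )

def redundant_forecasts(conn):
    """Forecast rows whose milestone already has a stamped real-time record."""
    return conn.execute(
        "SELECT f.record_id, f.order_id, f.stage, f.department "
        "FROM records f JOIN records r "
        "  ON r.order_id = f.order_id AND r.stage = f.stage "
        "WHERE f.record_type = 'forecast' "
        "  AND r.record_type = 'real_time' AND r.stamped = 1"
    ).fetchall()
```

Forecast rows returned by redundant_forecasts are candidates for archiving or deletion once the corresponding real-time record has been stamped, which is the intermediate-data reduction the problem statement asks for.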

My Tech Stack & Tools

👨🏻‍💻  Future Aspects

● Cloud computing can increase the speed and scalability of data processing. Platforms like Amazon Web Services (AWS) and Microsoft Azure can be used to run machine learning algorithms and store large datasets in the cloud.
● Distributed computing frameworks like Apache Hadoop and Apache Spark can speed up data processing and analysis by distributing tasks across multiple machines (see the sketch after this list).
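
As a sketch of the second point, the example below uses PySpark to keep a single authoritative row per order, stage, and record type, preferring stamped real-time records. The input path, column names, and output location are assumptions made for illustration.

```python
# Rough PySpark sketch: collapse duplicate milestone records at scale.
# Paths and column names are illustrative assumptions.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("intermediate-data-cleanup").getOrCreate()

# Read all departmental exports (path is hypothetical).
records = spark.read.csv("s3://example-bucket/milestone-records/*.csv", header=True)

# For each (order_id, stage, record_type), keep the stamped row if one exists.
w = (
    Window.partitionBy("order_id", "stage", "record_type")
    .orderBy(F.col("stamped").cast("int").desc())
)
deduped = (
    records
    .withColumn("rank", F.row_number().over(w))
    .filter(F.col("rank") == 1)
    .drop("rank")
)

# Write the consolidated view back to the data lake in a columnar format.
deduped.write.mode("overwrite").parquet("s3://example-bucket/consolidated/")

spark.stop()
```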
