
MAGNETRON ™ ✭: Nvidia Semantic Segmentation for making a MASKING PROXIA in the ARTIFICIAL INTELLIGENCE 2.0 ™ Framework. ARTIFICIAL INTELLIGENCE 2.0 ™ is part of MAGNETRON ™ TECHNOLOGY.


🤖 THE ABC 123 GROUP ™ 🤖

🌐 GENERAL CONSULTING ABC 123 BY OSAROPRIME ™.

🌐 ABC 123 USA ™

🌐 ABC 123 DESYGN ™

🌐 ABC 123 FILMS ™

=============================================================

                 🌐 MAGNETRON ™ 🌐

🌐 ARTIFICIAL INTELLIGENCE 2.0 ™ : OBJECT MASKING PROXIA A-2 (SEMANTIC SEGMENTATION). (SENSE: VISION_EYE CAMERAS)

*️⃣📶🤖

  • PHYSICAL WORLD SENSE: SIGHT ✅

  • PHYSICAL WORLD SENSE: SMELL

  • PHYSICAL WORLD SENSE: HEARING

  • PHYSICAL WORLD SENSE: TASTE

  • PHYSICAL WORLD SENSE: TOUCH

+++++++++++++++++++++++++++++++++++++

🌐 ASTRAL BODY MINDCLOUD: NO

🌐 PRANIC BODY MINDCLOUD: NO

🌐 INSTINCTIVE MIND MINDCLOUD: ✅

🌐 ASTRAL MIND MINDCLOUD: NO

🌐 PRANIC MIND MINDCLOUD: NO

REQUIREMENTS:

[*] Software Requirements: Google Colab/Jupyter Notebook, Python

[*] HARDWARE REQUIREMENTS: fast TPU/GPU.

[*] DEPENDENCIES: INCLUDED

=============================================================

This is a Google Colab/Jupyter Notebook for developing (one possible scheme for) a MASKING PROXIA when working with ARTIFICIAL INTELLIGENCE 2.0 ™ (ARTIFICIAL INTELLIGENCE 2.0™ is part of MAGNETRON ™ TECHNOLOGY). The machine running the Notebook will be a MINDCLOUD on which you will be developing a PROXIA to detect objects in IMAGES (for example: pictures from social media platforms).

For example, this INSTINCTIVE MIND MINDCLOUD PROXIA can process INFORMATION from the real world via the eye cameras and then send information about the detected objects to the IMAGINATION PROXIA on the ASTRAL MINDCLOUD, where the robot can IMAGINE them in different scenarios to better understand what they are and how people see them. So if the ROBOT detects a cup with the OBJECT DETECTION on the INSTINCTIVE MIND PROXIA, it can imagine that cup in some typical or unusual scenarios (on the ASTRAL PROXIA).

Prerequisite reading:

THERE ARE 2 MAIN TYPES OF SEGMENTATION (ALSO KNOWN AS MASKING):

#SEMANTIC SEGMENTATION

With semantic segmentation, objects shown in an image are grouped into defined categories. For instance, a street scene would be segmented into “pedestrians,” “bikes,” “vehicles,” “sidewalks,” and so on.
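For intuition, a minimal semantic-segmentation sketch using torchvision's pretrained DeepLabV3 (NOT this repository's HRNet+OCR model; the image filename is a placeholder):

# Minimal semantic-segmentation sketch: one class label per pixel.
# Uses torchvision's pretrained DeepLabV3 for illustration only.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.segmentation.deeplabv3_resnet50(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("street_scene.jpg").convert("RGB")  # placeholder input
batch = preprocess(img).unsqueeze(0)

with torch.no_grad():
    out = model(batch)["out"]           # shape: (1, num_classes, H, W)
mask = out.argmax(dim=1).squeeze(0)     # one class index per pixel
print(mask.shape, mask.unique())        # the set of classes present

Note how all pixels of one category (e.g. every “vehicle” pixel) share a single label, which is exactly the behavior contrasted with instance segmentation below.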

#INSTANCE SEGMENTATION

Instance segmentation is the task of detecting and delineating each distinct object of interest appearing in an image. This differs from SEMANTIC SEGMENTATION: semantic segmentation associates every pixel of an image with a class label such as person, flower, or car, and treats multiple objects of the same class as a single entity. In contrast, instance segmentation treats multiple objects of the same class as distinct individual instances. Consider instance segmentation a refined version of semantic segmentation: categories like “vehicles” are split into “cars,” “motorcycles,” “buses,” and so on, and instance segmentation detects the individual instances of each category.

#DIFFERENCE BETWEEN SEMANTIC SEGMENTATION AND INSTANCE SEGMENTATION

Semantic segmentation treats multiple objects within a single category as one entity. Instance segmentation, on the other hand, identifies individual objects within these categories.
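To make the contrast concrete, here is a minimal instance-segmentation sketch using torchvision's pretrained Mask R-CNN (illustration only; this notebook itself covers SEMANTIC segmentation, and the filename is a placeholder):

# Instance-segmentation sketch: each detected object gets its OWN mask,
# so two cars in the frame yield two separate entries.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.detection.maskrcnn_resnet50_fpn(pretrained=True).eval()

img = Image.open("street_scene.jpg").convert("RGB")  # placeholder input
tensor = transforms.ToTensor()(img)

with torch.no_grad():
    pred = model([tensor])[0]  # dict with boxes, labels, scores, masks

for label, score, mask in zip(pred["labels"], pred["scores"], pred["masks"]):
    if score > 0.5:            # keep confident detections only
        print(int(label), float(score), tuple(mask.shape))  # (1, H, W) each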

👑 INCLUDED STICKERS/SIGN:

FIND STICKERS HERE: https://bit.ly/3B8D3lE

  • PROMOTIONAL MATERIAL FOR 𝗠𝗔𝗚𝗡𝗘𝗧𝗥𝗢𝗡 𝗧𝗘𝗖𝗛𝗡𝗢𝗟𝗢𝗚𝗬 ™. (CUSTOM GRAPHICS BY 𝗔𝗕𝗖 𝟭𝟮𝟯 𝗗𝗘𝗦𝗬𝗚𝗡 ™/𝗢𝗦𝗔𝗥𝗢 𝗛𝗔𝗥𝗥𝗜𝗢𝗧𝗧). THE 𝗠𝗔𝗚𝗡𝗘𝗧𝗥𝗢𝗡 𝗧𝗘𝗖𝗛𝗡𝗢𝗟𝗢𝗚𝗬 ™ SYMBOL/LOGO IS A TRADEMARK OF 𝗧𝗛𝗘 𝗔𝗕𝗖 𝟭𝟮𝟯 𝗚𝗥𝗢𝗨𝗣 ™ FOR 𝗠𝗔𝗚𝗡𝗘𝗧𝗥𝗢𝗡 𝗧𝗘𝗖𝗛𝗡𝗢𝗟𝗢𝗚𝗬 ™. 𝗧𝗛𝗘 𝗔𝗕𝗖 𝟭𝟮𝟯 𝗚𝗥𝗢𝗨𝗣 ™ SYMBOL/LOGO IS A TRADEMARK OF 𝗧𝗛𝗘 𝗔𝗕𝗖 𝟭𝟮𝟯 𝗚𝗥𝗢𝗨𝗣 ™.

*️⃣📶🤖

  • PROMOTIONAL MATERIAL FOR 𝗔𝗥𝗧𝗜𝗙𝗜𝗖𝗜𝗔𝗟 𝗜𝗡𝗧𝗘𝗟𝗟𝗜𝗚𝗘𝗡𝗖𝗘 𝟮.𝟬 ™. (CUSTOM GRAPHICS BY 𝗔𝗕𝗖 𝟭𝟮𝟯 𝗗𝗘𝗦𝗬𝗚𝗡 ™/𝗢𝗦𝗔𝗥𝗢 𝗛𝗔𝗥𝗥𝗜𝗢𝗧𝗧) THE 𝗗𝗥𝗔𝗚𝗢𝗡 & 𝗖𝗥𝗢𝗪𝗡 👑 SYMBOL/LOGO IS A TRADEMARK OF 𝗧𝗛𝗘 𝗔𝗕𝗖 𝟭𝟮𝟯 𝗚𝗥𝗢𝗨𝗣 ™ ASSOCIATED WITH TECHNOLOGY. 𝗧𝗛𝗘 𝗔𝗕𝗖 𝟭𝟮𝟯 𝗚𝗥𝗢𝗨𝗣 ™ SYMBOL/LOGO IS A TRADEMARK OF 𝗧𝗛𝗘 𝗔𝗕𝗖 𝟭𝟮𝟯 𝗚𝗥𝗢𝗨𝗣 ™.

You must display the included stickers/signs (so that they are clearly visible) if you are working with MAGNETRON ™ TECHNOLOGY for the purposes of determining whether you want to purchase a technology license. This includes, but is not limited to, public technology displays, trade shows, technology expos, media appearances, investor events, computers (exterior), MINDCLOUD STORAGE (e.g. server room doors, render farm room doors), etc.

NOTE: SEE 𝗔𝗥𝗧𝗜𝗙𝗜𝗖𝗜𝗔𝗟 𝗜𝗡𝗧𝗘𝗟𝗟𝗜𝗚𝗘𝗡𝗖𝗘 𝟮.𝟬 ™ DOCUMENTATION FOR INFORMATION ABOUT THE MAIN MASKING PROXIA (ON INSTINCTIVE MIND MINDCLOUD).

NOTE: CLICK HERE FOR A NOTEBOOK ON MAKING MASKING PROXIA WITH INSTANCE SEGMENTATION (INSTEAD OF SEMANTIC SEGMENTATION):


Installation

  • The code is tested with PyTorch 1.3 and Python 3.6.
  • You can use ./Dockerfile to build an image; a typical build command is shown below this list.
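For example (an illustrative command run from the repository root; the image tag is your choice):

> docker build -t semantic-segmentation .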

Download Weights

  • Create a directory where you can keep large files. Ideally, not in this directory.
  > mkdir <large_asset_dir>
  • Update __C.ASSETS_PATH in config.py to point at that directory

    __C.ASSETS_PATH=<large_asset_dir>

  • Download pretrained weights from google drive and put into <large_asset_dir>/seg_weights
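As an optional sanity check, a minimal sketch confirming the checkpoints are where config.py expects them (the path below stands in for your actual <large_asset_dir>):

# Sanity check: the downloaded checkpoints should sit inside seg_weights.
import os

large_asset_dir = "/path/to/large_asset_dir"   # same value as __C.ASSETS_PATH
weights_dir = os.path.join(large_asset_dir, "seg_weights")

assert os.path.isdir(weights_dir), f"missing directory: {weights_dir}"
print("found weight files:", sorted(os.listdir(weights_dir)))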

Download/Prepare Data

If using Cityscapes, download Cityscapes data, then update config.py to set the path:

__C.DATASET.CITYSCAPES_DIR=<path_to_cityscapes>

If using Cityscapes Autolabelled Images, download Cityscapes data, then update config.py to set the path:

__C.DATASET.CITYSCAPES_CUSTOMCOARSE=<path_to_cityscapes>

If using Mapillary, download Mapillary data, then update config.py to set the path:

__C.DATASET.MAPILLARY_DIR=<path_to_mapillary>

Running the code

The instructions below make use of a tool called runx, which we find useful for automating experiment running and summarization. For more information about this tool, please see runx. In general, you can either use the runx-style command lines shown below, or call python train.py <args ...> directly if you like.

Run inference on Cityscapes

Dry run:

> python -m runx.runx scripts/eval_cityscapes.yml -i -n

This will just print out the command without running it. It's a good way to inspect the command line.

Real run:

> python -m runx.runx scripts/eval_cityscapes.yml -i

The reported IOU should be 86.92. This evaluates with scales of 0.5, 1.0, and 2.0. You will find evaluation results in ./logs/eval_cityscapes/...
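For intuition, multi-scale evaluation runs the network at several input resolutions and combines the per-scale predictions. A minimal sketch of the plain-averaging variant (model and image are placeholders; the repository's own eval code combines scales with learned attention weights rather than a simple mean):

# Sketch: run the model at each scale, resize the logits back to full
# resolution, and average them. Illustrative only.
import torch
import torch.nn.functional as F

def multi_scale_logits(model, image, scales=(0.5, 1.0, 2.0)):
    _, _, h, w = image.shape                      # image: (N, 3, H, W) tensor
    total = None
    for s in scales:
        scaled = F.interpolate(image, scale_factor=s,
                               mode="bilinear", align_corners=False)
        with torch.no_grad():
            logits = model(scaled)                # (N, C, s*H, s*W)
        logits = F.interpolate(logits, size=(h, w),
                               mode="bilinear", align_corners=False)
        total = logits if total is None else total + logits
    return total / len(scales)                    # averaged per-pixel scores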

Run inference on Mapillary

> python -m runx.runx scripts/eval_mapillary.yml -i

The reported IOU should be 61.05. Note that this must be run on a 32GB node, and the use of 'O3' mode for amp is critical to avoid running out of GPU memory. Results are written to logs/eval_mapillary/...
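'O3' refers to an opt_level of NVIDIA's apex amp library (pure FP16). Wiring it up looks roughly like this (a sketch; apex is assumed installed and model/optimizer already constructed):

# Sketch: pure-FP16 mixed precision via NVIDIA apex to reduce GPU memory.
from apex import amp

model, optimizer = amp.initialize(model, optimizer, opt_level="O3")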

Dump images for Cityscapes

> python -m runx.runx scripts/dump_cityscapes.yml -i

This will dump network output and composited images from running evaluation with the Cityscapes validation set.
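A "composited" image is the predicted mask blended over its source frame. The idea, in a minimal sketch (filenames and the color palette are placeholders, and the mask is assumed to be a class-index PNG of the same size as the image):

# Sketch: composite a predicted class-index mask over its source image.
import numpy as np
from PIL import Image

image = Image.open("frame.png").convert("RGB")    # placeholder filenames
mask = np.array(Image.open("pred_mask.png"))      # (H, W) of class indices

# Map each class index to a fixed RGB color via a lookup table.
palette = np.random.RandomState(0).randint(0, 255, (256, 3), dtype=np.uint8)
color = Image.fromarray(palette[mask])            # (H, W, 3) color mask

composite = Image.blend(image, color, alpha=0.5)  # 50/50 overlay
composite.save("composite.png")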

Run inference and dump images on a folder of images

> python -m runx.runx scripts/dump_folder.yml -i

You should end up seeing images that look like the following:

[Example image: composited segmentation output]

Train a model

Train Cityscapes using HRNet + OCR + multi-scale attention, with fine data and a Mapillary-pretrained model:

> python -m runx.runx scripts/train_cityscapes.yml -i

The first time this command is run, a centroid file has to be built for the dataset. This takes about 10 minutes. The centroid file is used during training to sample from the dataset in a class-uniform way.
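Conceptually, class-uniform sampling first picks a class and then one of that class's recorded centroid locations, so rare classes are sampled as often as common ones. A hedged sketch of the idea (the repository's dataloader contains the real implementation; the data structure below is illustrative):

# Concept sketch: the centroid file records, per class, image locations
# where that class appears; sampling picks a class uniformly, then one of
# its (image, centroid) pairs, and crops around that centroid.
import random

def class_uniform_sample(centroids_per_class):
    """centroids_per_class: {class_id: [(image_path, (x, y)), ...]}"""
    cls = random.choice(list(centroids_per_class))         # uniform over classes
    image_path, centroid = random.choice(centroids_per_class[cls])
    return cls, image_path, centroid                       # crop around centroid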

This training run should deliver a model that achieves 84.7 IOU.

Train SOTA default train-val split

> python -m runx.runx  scripts/train_cityscapes_sota.yml -i

Again, use -n to do a dry run and just print out the command. This should result in a model with 86.8 IOU. If you run out of memory, try lowering the crop size or turning off rmi_loss.
