
mtk rf consumes too much RAM #710

Closed
bodqhrohro opened this issue Jun 26, 2023 · 6 comments

@bodqhrohro

I have a device with 64 GB of flash (Qin F21 Pro) and tried to dump the whole stock firmware. Luckily I noticed, when 56.8% had already been dumped, that the python process was consuming ≈10 GB of RAM (RES+SWAP), so I killed it before it could freeze my system completely (the OOM killer tends to kill anything but such hogs, including crucial processes like dbus-daemon, so I don't trust it).

I would suspect it just keeps a copy of everything dumped in RAM, but the RAM usage is a few times less than the amount dumped so far, so it's kinda weird.
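
For reference, the RES+SWAP figure can be tracked over time from another terminal by polling the process with psutil. This is a hypothetical helper script, not part of mtkclient; pass it the PID of the running mtk process.

# Hypothetical monitor script, not part of mtkclient: poll RSS + swap of a PID.
import sys
import time

import psutil  # third-party: pip install psutil


def watch(pid, interval=1.0):
    proc = psutil.Process(pid)
    while proc.is_running():
        info = proc.memory_full_info()   # includes a swap field on Linux
        swap = getattr(info, "swap", 0)
        print(f"RES={info.rss / 2**30:.2f} GiB  SWAP={swap / 2**30:.2f} GiB")
        time.sleep(interval)


if __name__ == "__main__":
    watch(int(sys.argv[1]))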

@bodqhrohro
Author

Seems like it's an issue with SMR on the external drive I'm dumping onto.

I dumped everything but userdata yesterday, and left userdata (≈45 GB) dumping overnight. Today I discovered that the process had been killed at 37 GB.

Then I split the userdata offset/length manually into two even parts and tried to dump them separately with ro. The first one was dumped quickly with no fuss, but with the second one a lot of RAM was eaten again. I killed several other greedy processes to let it go, but the process still reached 14.9 GB (4.0 GB RES + 10.9 GB in zRam SWAP) and exhausted the zRam SWAP completely, so there was no free space left there. After that, the dumping speed dropped drastically from 16 MB/s to less than 1 MB/s and the process sat mostly in the D state, but dumping still continued. I checked the size of the image file and noticed it was only 12 GB at that point, even though the progress indicator was already showing 91%. The HDD LED was flashing the whole time; I watched the file size slowly grow by two more gigabytes, but the load average kept growing too, so the system finally went unresponsive and I had to kill the process.

If my hypothesis about the write cache is true, it's weird that the data is kept in the process's memory rather than in the kernel's write cache or in the NTFS-3G process (worth noting: the partition on the external HDD is NTFS, mounted via NTFS-3G). Could you please introduce some write-cache limit, so that mtk stops reading data from the flash memory and waits until the target medium catches up?
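
The limit asked for here is usually implemented as a bounded queue between the thread that reads from the device and the thread that writes the image file: once the queue is full, the reader blocks instead of piling more data up in RAM. A rough sketch of that pattern follows; it is not mtkclient's actual code, and read_chunk is a placeholder for whatever routine pulls data over USB.

# Generic producer/consumer sketch with backpressure; read_chunk() is a
# placeholder for the real device-read routine, not an mtkclient API.
import queue
import threading

MAX_CHUNKS = 64           # cap on buffered chunks, bounds RAM usage
CHUNK_SIZE = 0x100000     # 1 MiB per read

buf = queue.Queue(maxsize=MAX_CHUNKS)   # put() blocks while the queue is full


def reader(read_chunk, total_bytes):
    done = 0
    while done < total_bytes:
        data = read_chunk(min(CHUNK_SIZE, total_bytes - done))
        buf.put(data)                   # blocks here until the writer catches up
        done += len(data)
    buf.put(None)                       # sentinel: end of stream


def writer(path):
    with open(path, "wb") as f:
        while True:
            data = buf.get()
            if data is None:
                break
            f.write(data)


def dump(read_chunk, total_bytes, path):
    t = threading.Thread(target=writer, args=(path,))
    t.start()
    reader(read_chunk, total_bytes)
    t.join()

With maxsize set, the reader can never get more than MAX_CHUNKS * CHUNK_SIZE bytes (64 MiB here) ahead of the disk, so a slow SMR drive simply slows the dump down instead of filling RAM.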

Now gonna split the range into even smaller parts. Pretty meh.

@bodqhrohro
Author

Now wut?

I executed in a row:

/media/d/temp/home/virtualenvs/mtkclient/bin/python3.11 /media/d/temp/git/mtkclient/mtk ro 0x0000000131800000 0x6ad27c000 userdata_part1
/media/d/temp/home/virtualenvs/mtkclient/bin/python3.11 /media/d/temp/git/mtkclient/mtk ro 0x7dea7c000 0x35693e000 userdata_part2
/media/d/temp/home/virtualenvs/mtkclient/bin/python3.11 /media/d/temp/git/mtkclient/mtk ro 0xb353ba000 0x35693e000 userdata_part3

And got:

-rwxrwxrwx 1 bodqhrohro bodqhrohro 25000673280 Jun 26 12:55 userdata_part1
-rwxrwxrwx 1 bodqhrohro bodqhrohro 14337433600 Jun 26 15:15 userdata_part2
-rwxrwxrwx 1 bodqhrohro bodqhrohro  9460252672 Jun 26 15:31 userdata_part3

Why is the last part smaller? The process has finished and the file is not growing anymore.

And now I'm not sure whether the first part is complete either, so I'm gonna read it in two smaller parts all over again 🤕
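
For what it's worth, the three requested ranges are contiguous, and the file sizes can be checked against the requested lengths directly; a quick arithmetic check on the numbers above:

# Sanity check of the three ro commands above: are the ranges contiguous,
# and do the resulting files match the requested lengths?
parts = [
    # (offset, requested length, actual file size)
    (0x0000000131800000, 0x6AD27C000, 25_000_673_280),
    (0x7DEA7C000,        0x35693E000, 14_337_433_600),
    (0xB353BA000,        0x35693E000,  9_460_252_672),
]

for i, (off, length, actual) in enumerate(parts, 1):
    print(f"part{i}: requested {length} bytes, got {actual}, short by {length - actual}")
    if i < len(parts):
        assert off + length == parts[i][0], "ranges are not contiguous"

Run on these numbers, part 2 matches its requested length exactly, while part 1 comes out about 3.4 GiB short and part 3 about 4.5 GiB short, so the worry about part 1 looks justified.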

@bodqhrohro
Author

Same again, now I have this issue with smaller parts too. mtk pumped about 5 GB into RAM, finished dumping, waited for a while and exited prematurely, so only 9 of 13 GB were flushed onto the HDD. Pretty frustrating.

Luckily, I have another HDD, a stupid one with no SMR (not even S.M.A.R.T., hehe, pun intended), and dumped the complete userdata there with no issues.
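
If the missing gigabytes were still buffered inside the python process when it exited, they are simply gone; if they had already been handed to the kernel or to NTFS-3G, they can keep trickling out to the disk after exit. Either way, a dump tool that wants its progress to reflect data actually on the medium has to flush and fsync before closing the file. A minimal sketch of that idea, not mtkclient's actual write path:

# Minimal sketch (not mtkclient code): make sure written data has reached the
# target medium before the file is closed and the process exits.
import os


def write_image(path, chunks):
    with open(path, "wb") as f:
        for data in chunks:
            f.write(data)        # lands in Python's buffer / kernel page cache first
        f.flush()                # push Python's userspace buffer down to the kernel
        os.fsync(f.fileno())     # ask the kernel (and FUSE/NTFS-3G) to commit to disk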

@bodqhrohro
Author

Sheesh, I was watching the full userdata now being copied onto the first HDD, and noticed that the last unfinished file keeps growing too 😱 long after the process that was writing it exited.
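
That behaviour is consistent with the data having been handed off to the page cache and NTFS-3G rather than written synchronously: the dirty pages keep draining to the slow SMR drive long after the writing process is gone. On Linux the kernel-side part of this backlog is visible in /proc/meminfo (anything still buffered inside the ntfs-3g process itself won't show up there); a small watcher sketch:

# Watch the kernel's dirty/writeback page-cache backlog draining (Linux only).
import time


def dirty_stats():
    stats = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            if key in ("Dirty", "Writeback"):
                stats[key] = int(value.strip().split()[0])  # values are in kB
    return stats


while True:
    s = dirty_stats()
    print(f"Dirty: {s['Dirty'] / 1024:.1f} MiB   Writeback: {s['Writeback'] / 1024:.1f} MiB")
    time.sleep(2)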

@bkerler
Owner

bkerler commented Jun 26, 2023

It's because threading is used. Normally it should write directly to the HDD/SSD and then free the memory. Not sure why it doesn't do that for you.


@github-actions (bot)
Stale issue message

github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) Jul 24, 2024