
> Also, back then PC applications didn't have too many files and they tended to be much bigger than their Unix counterparts.

Okay, let me interrupt you right here. To this very day, Linux defaults to a maximum of 1024 file descriptors per process. And select(2), in fact, can't be persuaded to use FDs larger than 1023 without recompiling libc, because glibc hard-codes FD_SETSIZE to 1024.

Now let's look at Windows XP Home Edition -- you can write a loop like "for (int i = 0; i < 1000000; i++) { char tmp[100]; sprintf(tmp, "%d", i); CreateFileA(tmp, GENERIC_ALL, FILE_SHARE_READ, NULL, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL); }" and it will dutifully open a million file handles in a single process (although it'll take quite some time) with no complaints at all. Also, on Windows, select() accepts an arbitrary number of socket handles: Winsock's fd_set is a counted array of SOCKETs, and FD_SETSIZE only caps the helper macros, so you can redefine it before including winsock2.h.

I dunno, but it looks to me like Windows was actually designed to handle applications that would work with lots of files simultaneously.

> fundamentally a legacy/out-of-date OS architecture

You probably wanted to write "badly designed OS architecture", because Linux (if you count it as a continuation of UNIX) is actually an older OS architecture than Windows.




1024 is a soft limit you can raise with 'ulimit -n', up to the per-process hard limit.

The actual limit can be seen via 'sysctl fs.file-max'. On my stock install it's 13160005.
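A minimal sketch of those two limits from Python (Linux assumed): the stdlib resource module exposes the same soft/hard RLIMIT_NOFILE pair that 'ulimit -n' manipulates, and an unprivileged process may raise its own soft limit up to the hard limit.

```python
import resource

# The per-process limits behind `ulimit -n` / `ulimit -Hn`.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")

# Raising the soft limit up to the hard limit needs no privileges.
# 4096 here is an arbitrary example value, capped at the hard limit.
new_soft = min(4096, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
assert resource.getrlimit(resource.RLIMIT_NOFILE)[0] == new_soft
```

The system-wide ceiling from 'sysctl fs.file-max' is a separate number: it bounds open file descriptions across all processes, not any single process.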



