Why is unzipped directory much smaller (4.0 K) than zipped (73.0 G)?
I unzipped an archive using zip -l <filename>, but what I get is a directory that is much smaller than the zip it came from. The unzipped directory contains all the files, mostly videos. Why is the unzipped directory exactly 4.0K? Am I missing something?
Output of the command ls -alh:
drwxrwsr-x 4 shubhankar gen011 4.0K May 19 15:47 Moments_in_Time_256x256_30fps
-rw-rw-r-- 1 shubhankar gen011 73G Mar 1 2018 Moments_in_Time_256x256_30fps.zip
centos zip unzip
asked May 20 at 0:16 by bluedroid (new contributor), last edited 18 hours ago
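A quick way to check what was actually extracted and how much space it uses is sketched below (the archive and directory names are taken from the ls -alh output above; zip itself never extracts anything, so plain unzip is assumed to be the command that actually did the extraction):
unzip -l Moments_in_Time_256x256_30fps.zip | tail -3   # listing ends with the total uncompressed size and file count
unzip Moments_in_Time_256x256_30fps.zip                # extract; zip -l only creates or updates archives
du -sh Moments_in_Time_256x256_30fps                   # total disk usage of the extracted tree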
Instead of ls -lah, try using du -h on the directory. – hojusaram, May 20 at 2:50
Maybe it would be a good idea to change the question title to something like "Why is my unzipped file only 4KB?" – therefromhere, May 20 at 5:53
@therefromhere No, that would completely change the question, and it would be asking about a situation that is not occurring. – Scott, May 22 at 1:04
This question is duplicated so many times. I wonder why it is so highly voted. – Pedro Lobito, 2 days ago
1 Answer
The size of a directory as shown in your ls output isn't the sum of the sizes of its contents; it is the size of the metadata associated with the directory (file names, etc.). See https://unix.stackexchange.com/questions/55/what-does-size-of-a-directory-mean-in-output-of-ls-l-command
To find out how much space the directory contents are actually using, you can run:
du -sh /path/to/directory
answered May 20 at 0:43 by ivanivan
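To make the distinction concrete, here is a short sketch (the directory name is taken from the question; the du figure is illustrative and depends on what was actually extracted):
ls -ldh Moments_in_Time_256x256_30fps    # size of the directory entry itself: the block holding the file names, e.g. 4.0K
du -sh Moments_in_Time_256x256_30fps     # recursive total of the data stored under it, e.g. tens of GB here
On typical Linux filesystems the directory entry only grows (to 8.0K, 12K, ...) when it has to hold many file names; it never reflects the size of the files' contents.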
And the answer to just why this design decision was made is left to the reader (after running both commands ;-) ). – Peter A. Schneider, May 20 at 11:44
To be fair, the filesystem could cache the total size of each directory in the metadata. – poizan42, May 20 at 14:01
@poizan42 No, because files could be hard-linked, so you cannot just sum up sizes when walking up the hierarchy. – Simon Richter, May 20 at 14:11
@poizan42 That would be quite inefficient, requiring the filesystem to update all the parent directories at every change (including the root dir, whose size would change constantly). – Erwan, May 20 at 15:37
@poizan42 That solution is even worse than it appears at first glance (which is already unacceptably slow): inodes do not store references to the directories that link them, just a count. That means you would also have to store a lot more metadata with each inode and worry about keeping everything in sync. Quite a lot of overhead and complexity for what would be a rarely used feature. – Voo, May 21 at 11:53
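Simon Richter's hard-link objection above can be seen in a small shell sketch (all names here are made up for the illustration): the same data blocks are reachable from two directories, so per-directory totals cannot simply be cached and summed up the tree.
mkdir -p demo/a demo/b
dd if=/dev/zero of=demo/a/data bs=1M count=1   # one real (non-sparse) 1 MiB file
ln demo/a/data demo/b/data                     # a second name for the same inode and data blocks
du -sh demo/a                                  # ~1.0M
du -sh demo/b                                  # ~1.0M when measured in a separate run
du -sh demo                                    # still ~1.0M, not 2M: within one run du counts the shared inode once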