Why is unzipped directory much smaller (4.0 K) than zipped (73.0 G)?


38 votes

I unzipped a zipped file using zip -l <filename>, but what I get is a directory much smaller than the archive was before unzipping. The unzipped directory has all the files, mostly videos. Why is the unzipped directory exactly 4.0K? Am I missing something?



Bash output of command ls -alh:



drwxrwsr-x 4 shubhankar gen011 4.0K May 19 15:47 Moments_in_Time_256x256_30fps
-rw-rw-r-- 1 shubhankar gen011 73G Mar 1 2018 Moments_in_Time_256x256_30fps.zip
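
A quick way to sanity-check an extraction like this is to compare the real disk usage of the extracted tree with the totals the archive itself reports. A sketch, assuming the standard Info-ZIP unzip tool is available (names taken from the listing above):

du -sh Moments_in_Time_256x256_30fps          # actual disk usage of the extracted tree
unzip -l Moments_in_Time_256x256_30fps.zip    # list the archive; the last line shows the
                                              # total uncompressed size and file count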











centos zip unzip

asked May 20 at 0:16, edited 18 hours ago
bluedroid (new contributor)

  • (5) Instead of ls -lah, try using du -h on the directory. – hojusaram, May 20 at 2:50

  • (15) Maybe it would be a good idea to change the question title to something like "Why is my unzipped file only 4KB?" – therefromhere, May 20 at 5:53

  • (4) @therefromhere No, that would be completely changing the question, and it would be asking about a situation that is not occurring. – Scott, May 22 at 1:04

  • This question is duplicated so many times; I wonder why it is so highly voted. – Pedro Lobito, 2 days ago

1 Answer

148 votes

The size of a directory as shown in your ls output isn't the sum of the sizes of its contents; it is the size of the metadata associated with the directory (file names, etc.).



https://unix.stackexchange.com/questions/55/what-does-size-of-a-directory-mean-in-output-of-ls-l-command



To find out how much space the directory contents are using, you can use



du -sh /path/to/directory
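
For example, applied to the directory from the question (a sketch; the exact figures depend on the filesystem and its block size):

ls -ldh Moments_in_Time_256x256_30fps    # size of the directory entry itself (the 4.0K above)
du -sh  Moments_in_Time_256x256_30fps    # total space used by everything stored under it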






answered May 20 at 0:43
ivanivan

  • (21) And the answer to just why this design decision was made is left to the reader (after running both commands ;-)). – Peter A. Schneider, May 20 at 11:44

  • (1) To be fair, the filesystem could cache the total size of each directory in the metadata. – poizan42, May 20 at 14:01

  • (11) @poizan42, no, because files could be hardlinked, so you cannot just sum up sizes when walking up the hierarchy. – Simon Richter, May 20 at 14:11

  • (25) @poizan42 That would be quite inefficient, requiring the filesystem to update all the parent directories at every change (including the root directory, whose size would change constantly). – Erwan, May 20 at 15:37

  • (8) @poizan42 That solution is even worse than it appears at first glance (which is already unacceptably slow): inodes do not store references to the directories that link to them, just a count, so you would also have to store a lot more metadata with each inode and worry about keeping everything in sync. Quite a lot of overhead and complexity for what would be a rarely used feature. – Voo, May 21 at 11:53
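
The hard-link point raised in the comments can be seen with a small experiment (a sketch using GNU coreutils dd and du; the directory names and the 100 MB size are only illustrative):

mkdir -p a b
dd if=/dev/zero of=a/big bs=1M count=100   # one 100 MB file in directory a
ln a/big b/big                             # hard-link the same file into directory b
du -sh a                                   # ~100M
du -sh b                                   # ~100M
du -shc a b                                # the combined "total" line is still ~100M, not 200M,
                                           # because du counts each inode only once

Per-directory totals would add up to 200M even though only 100M of data exists on disk, which is why a filesystem cannot simply cache per-directory sums and add them together.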









