Size of a folder with du


I copied a folder with rsync, preserving symlinks, hard links, and permissions, and deleting extraneous files on the destination. The two copies should be practically identical.



One folder is on a USB drive and the other on a local disk.



If I run du -bls on both folders, the reported sizes differ slightly.



My du supports --apparent-size (which -b implies), and -l should make it count the contents of hard-linked files.



How can this difference be explained and how do I get the actual total?



Both file systems are ext4; the only difference is that the USB drive is encrypted.



EDIT:



I dug down to find the directories that actually differed. I found one, and its contents are nothing special (no block devices, no pipes, no hard links or symlinks, no zero-byte files); the only peculiarity may be that it holds many small files. For this particular folder the difference is 872830 vs 881022.



I also ran du -blsc in both folders, and in that case the results are the same.



Some extra details on the commands I used:



$ du -Pbsl $LOCALDIR $USBDIR | cut -f1
872830
881022

$ du -Pbslc $LOCALDIR/*
[...]
868734 total

$ du -Pbslc $USBDIR/*
[...]
868734 total

$ ls -la $USBDIR | wc
158 1415 9123
$ ls -la $LOCALDIR | wc
158 1415 9123

$ diff -sqr --no-dereference $LOCALDIR $USBDIR | grep -v identical
[No output and all identical if I remove the grep]
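For what it's worth, one way to rule out directory-node sizes entirely is to total only the regular files' apparent sizes. A minimal sketch, assuming GNU find (for -printf) and using a throwaway demo directory in place of a real tree:

```shell
# Throwaway demo directory; substitute your own tree for LOCALDIR.
LOCALDIR=$(mktemp -d)
printf 'hello' > "$LOCALDIR/a.txt"   # one 5-byte file
mkdir "$LOCALDIR/sub"                # an empty subdirectory

# Sum the apparent sizes of regular files only; directory nodes are
# skipped, so slack left behind by deleted entries cannot skew the total.
total=$(find "$LOCALDIR" -type f -printf '%s\n' | awk '{s += $1} END {print s + 0}')
echo "$total"   # prints 5
```

Running this over both trees should produce identical totals whenever the file contents match, regardless of how large the directory nodes themselves are.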




























  • Define "pretty identical" and "slightly different". – Kusalananda, May 11 at 14:47
  • How much of a difference? – Atul, May 11 at 14:55
  • @Kusalananda pretty identical = identical content, user, permissions, and timestamp for each file/folder; slightly different = a small number of bytes. – Stefano d'Antonio, May 11 at 14:56
  • @Atul 836034841990 vs 836037115270 (the content is roughly 800 GB). – Stefano d'Antonio, May 11 at 14:57
  • That's about 2 MB. Are you able to run md5sum over the files and verify that against the other set? I wonder if you have a lot of directories that could account for the difference (some filesystems don't truncate directories when you delete entries). – Kusalananda, May 11 at 15:00


















linux bash files filesystems






asked May 11 at 14:43 by Stefano d'Antonio
edited May 11 at 15:10 by Stefano d'Antonio







2 Answers
































Since you have copied the files using rsync and then compared the two sets of files using diff, and since diff reports no difference, the two sets of files are identical.



The size difference can then probably be explained by the sizes of the actual directory nodes within the two directory structures. On some filesystems, the directory is not truncated if a file or subdirectory is deleted, leaving a directory node that is slightly larger than what's actually needed.



If you have, at some point, kept many files that were later deleted, this might have left large directory nodes.



Example:



$ mkdir dir
$ ls -ld dir
drwxr-xr-x 2 kk wheel 512 May 11 17:09 dir




$ touch dir/file-{1..1000}
$ ls -ld dir
drwxr-xr-x 2 kk wheel 20480 May 11 17:09 dir




$ rm dir/*
$ ls -ld dir
drwxr-xr-x 2 kk wheel 20480 May 11 17:09 dir
$ du -h .
20.0K ./dir
42.0K .
$ ls -R
dir

./dir:


Notice how, even though I deleted the 1000 files I created, the dir directory still uses 20 KB.
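You can inspect this behaviour on your own system by checking a directory node's own size directly. A sketch assuming GNU coreutils stat; the exact number depends on the filesystem the scratch directory lives on:

```shell
# Scratch directory; its node size depends on the underlying filesystem.
dir=$(mktemp -d)
size=$(stat -c '%s' "$dir")   # size in bytes of the directory node itself
echo "$size"
rmdir "$dir"
```

On ext4 a fresh directory node is typically one 4096-byte block; after creating and deleting many entries, the reported size stays at its high-water mark rather than shrinking.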





























  • That is quite interesting; I also used du -bs and was able to reproduce what you described. It would be interesting to know what ext4 does. – Stefano d'Antonio, May 11 at 15:17
  • @Stefanod'Antonio I believe that ext4 behaves the same. My tests were on an OpenBSD system using its native FFS filesystem. – Kusalananda, May 11 at 15:20
  • @Kusalananda What I meant is how this works behind the scenes: what's the threshold, and why does it do that? – Stefano d'Antonio, May 11 at 15:21
  • @Stefanod'Antonio Possibly to reduce filesystem fragmentation. There is no threshold; the directory node is simply never truncated. – Kusalananda, May 11 at 15:22
  • @danieldeveloper001 I'm not a Linux user and don't know if there's a specific tool for doing this on ext4 filesystems, but the portable way would be to move the contents of a directory to a new directory and then rmdir the original. Or, for a whole hierarchy, use rsync to copy it, then delete the original (as the user in the question actually did). – Kusalananda, May 11 at 15:31

































Have you checked the filesystem block size? Even though both devices use the same filesystem type, the block sizes may differ, and that could explain the "slightly different" sizes.

When storing, for instance, a bunch of 1 KiB files on a filesystem with an 8 KiB block size, 7 KiB is wasted per file. The space your files actually take on disk is the size of the allocated blocks, not the size of the files themselves (unless the filesystem packs multiple files per block). Check each device's block size with the command below.



# blockdev --getbsz <DEVICE>
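The rounding described above is easy to compute. A sketch with hypothetical example values (a 5000-byte file on a 4096-byte-block filesystem):

```shell
BLOCK=4096   # filesystem block size (hypothetical example value)
SIZE=5000    # apparent file size in bytes (hypothetical example value)

# Round up to whole blocks: a 5000-byte file occupies two 4096-byte
# blocks, i.e. 8192 bytes on disk, leaving 3192 bytes of slack.
used=$(( (SIZE + BLOCK - 1) / BLOCK * BLOCK ))
echo "$used"   # prints 8192
```

Note that with du -b (apparent size), this block rounding should not affect the totals; it only matters for allocated-block reporting.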
























  • Should this not be covered by using --block-size=1 as a du option? – Stefano d'Antonio, May 11 at 15:17
  • Just checked: they both return 4096. – Stefano d'Antonio, May 11 at 15:18
  • Sorry, started writing before your edit ;) – danieldeveloper001, May 11 at 15:18
  • No worries, it was a good guess. – Stefano d'Antonio, May 11 at 15:19











Your Answer








StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "106"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);

else
createEditor();

);

function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);



);













draft saved

draft discarded


















StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2funix.stackexchange.com%2fquestions%2f518424%2fsize-of-a-folder-with-du%23new-answer', 'question_page');

);

Post as a guest















Required, but never shown

























2 Answers
2






active

oldest

votes








2 Answers
2






active

oldest

votes









active

oldest

votes






active

oldest

votes









11














Since you have copied the files using rsync and then compared the two sets of files using diff, and since diff reports no difference, the two sets of files are identical.



The size difference can then probably be explained by the sizes of the actual directory nodes within the two directory structures. On some filesystems, the directory is not truncated if a file or subdirectory is deleted, leaving a directory node that is slightly larger than what's actually needed.



If you have, at some point, kept many files that were later deleted, this might have left large directory nodes.



Example:



$ mkdir dir
$ ls -ld dir
drwxr-xr-x 2 kk wheel 512 May 11 17:09 dir




$ touch dir/file-1..1000
$ ls -ld dir
drwxr-xr-x 2 kk wheel 20480 May 11 17:09 dir




$ rm dir/*
$ ls -ld dir
drwxr-xr-x 2 kk wheel 20480 May 11 17:09 dir
$ du -h .
20.0K ./dir
42.0K .
$ ls -R
dir

./dir:


Notice how, even though I deleted the 1000 files I created, the dir directory still uses 20 KB.






share|improve this answer























  • That is quite interesting, I also used du -bs and I was able to reproduce what you described. Would be interesting to know what ext4 does.

    – Stefano d'Antonio
    May 11 at 15:17











  • @Stefanod'Antonio I believe that ext4 behaves the same. My tests were on an OpenBSD system using its native FFS filesystem.

    – Kusalananda
    May 11 at 15:20












  • @Kusalanada what I meant is how this works behind the scenes: what's the threshold and why it does that.

    – Stefano d'Antonio
    May 11 at 15:21






  • 1





    @Stefanod'Antonio Possibly to reduce filesystem fragmentation. There is no threshold. The directory node is simply never truncated.

    – Kusalananda
    May 11 at 15:22






  • 1





    @danieldeveloper001 I'm not a Linux user and don't know if there's some specific tool for doing this on ext4 filesystems, but the portable way would be to move the contents of a directory to a new directory and then rmdir the original directory. Or, for a whole hierarchy, use rsync to copy it, then delete the original (as the user in the question actually did).

    – Kusalananda
    May 11 at 15:31
















11














Since you have copied the files using rsync and then compared the two sets of files using diff, and since diff reports no difference, the two sets of files are identical.



The size difference can then probably be explained by the sizes of the actual directory nodes within the two directory structures. On some filesystems, the directory is not truncated if a file or subdirectory is deleted, leaving a directory node that is slightly larger than what's actually needed.



If you have, at some point, kept many files that were later deleted, this might have left large directory nodes.



Example:



$ mkdir dir
$ ls -ld dir
drwxr-xr-x 2 kk wheel 512 May 11 17:09 dir




$ touch dir/file-1..1000
$ ls -ld dir
drwxr-xr-x 2 kk wheel 20480 May 11 17:09 dir




$ rm dir/*
$ ls -ld dir
drwxr-xr-x 2 kk wheel 20480 May 11 17:09 dir
$ du -h .
20.0K ./dir
42.0K .
$ ls -R
dir

./dir:


Notice how, even though I deleted the 1000 files I created, the dir directory still uses 20 KB.






share|improve this answer























  • That is quite interesting, I also used du -bs and I was able to reproduce what you described. Would be interesting to know what ext4 does.

    – Stefano d'Antonio
    May 11 at 15:17











  • @Stefanod'Antonio I believe that ext4 behaves the same. My tests were on an OpenBSD system using its native FFS filesystem.

    – Kusalananda
    May 11 at 15:20












  • @Kusalanada what I meant is how this works behind the scenes: what's the threshold and why it does that.

    – Stefano d'Antonio
    May 11 at 15:21






  • 1





    @Stefanod'Antonio Possibly to reduce filesystem fragmentation. There is no threshold. The directory node is simply never truncated.

    – Kusalananda
    May 11 at 15:22






  • 1





    @danieldeveloper001 I'm not a Linux user and don't know if there's some specific tool for doing this on ext4 filesystems, but the portable way would be to move the contents of a directory to a new directory and then rmdir the original directory. Or, for a whole hierarchy, use rsync to copy it, then delete the original (as the user in the question actually did).

    – Kusalananda
    May 11 at 15:31














11












11








11







Since you have copied the files using rsync and then compared the two sets of files using diff, and since diff reports no difference, the two sets of files are identical.



The size difference can then probably be explained by the sizes of the actual directory nodes within the two directory structures. On some filesystems, the directory is not truncated if a file or subdirectory is deleted, leaving a directory node that is slightly larger than what's actually needed.



If you have, at some point, kept many files that were later deleted, this might have left large directory nodes.



Example:



$ mkdir dir
$ ls -ld dir
drwxr-xr-x 2 kk wheel 512 May 11 17:09 dir




$ touch dir/file-1..1000
$ ls -ld dir
drwxr-xr-x 2 kk wheel 20480 May 11 17:09 dir




$ rm dir/*
$ ls -ld dir
drwxr-xr-x 2 kk wheel 20480 May 11 17:09 dir
$ du -h .
20.0K ./dir
42.0K .
$ ls -R
dir

./dir:


Notice how, even though I deleted the 1000 files I created, the dir directory still uses 20 KB.






share|improve this answer













Since you have copied the files using rsync and then compared the two sets of files using diff, and since diff reports no difference, the two sets of files are identical.



The size difference can then probably be explained by the sizes of the actual directory nodes within the two directory structures. On some filesystems, the directory is not truncated if a file or subdirectory is deleted, leaving a directory node that is slightly larger than what's actually needed.



If you have, at some point, kept many files that were later deleted, this might have left large directory nodes.



Example:



$ mkdir dir
$ ls -ld dir
drwxr-xr-x 2 kk wheel 512 May 11 17:09 dir




$ touch dir/file-1..1000
$ ls -ld dir
drwxr-xr-x 2 kk wheel 20480 May 11 17:09 dir




$ rm dir/*
$ ls -ld dir
drwxr-xr-x 2 kk wheel 20480 May 11 17:09 dir
$ du -h .
20.0K ./dir
42.0K .
$ ls -R
dir

./dir:


Notice how, even though I deleted the 1000 files I created, the dir directory still uses 20 KB.







share|improve this answer












share|improve this answer



share|improve this answer










answered May 11 at 15:12









KusalanandaKusalananda

146k18277458




146k18277458












  • That is quite interesting, I also used du -bs and I was able to reproduce what you described. Would be interesting to know what ext4 does.

    – Stefano d'Antonio
    May 11 at 15:17











  • @Stefanod'Antonio I believe that ext4 behaves the same. My tests were on an OpenBSD system using its native FFS filesystem.

    – Kusalananda
    May 11 at 15:20












  • @Kusalanada what I meant is how this works behind the scenes: what's the threshold and why it does that.

    – Stefano d'Antonio
    May 11 at 15:21






  • 1





    @Stefanod'Antonio Possibly to reduce filesystem fragmentation. There is no threshold. The directory node is simply never truncated.

    – Kusalananda
    May 11 at 15:22






  • 1





    @danieldeveloper001 I'm not a Linux user and don't know if there's some specific tool for doing this on ext4 filesystems, but the portable way would be to move the contents of a directory to a new directory and then rmdir the original directory. Or, for a whole hierarchy, use rsync to copy it, then delete the original (as the user in the question actually did).

    – Kusalananda
    May 11 at 15:31


















  • That is quite interesting, I also used du -bs and I was able to reproduce what you described. Would be interesting to know what ext4 does.

    – Stefano d'Antonio
    May 11 at 15:17











  • @Stefanod'Antonio I believe that ext4 behaves the same. My tests were on an OpenBSD system using its native FFS filesystem.

    – Kusalananda
    May 11 at 15:20












  • @Kusalanada what I meant is how this works behind the scenes: what's the threshold and why it does that.

    – Stefano d'Antonio
    May 11 at 15:21






  • 1





    @Stefanod'Antonio Possibly to reduce filesystem fragmentation. There is no threshold. The directory node is simply never truncated.

    – Kusalananda
    May 11 at 15:22






  • 1





    @danieldeveloper001 I'm not a Linux user and don't know if there's some specific tool for doing this on ext4 filesystems, but the portable way would be to move the contents of a directory to a new directory and then rmdir the original directory. Or, for a whole hierarchy, use rsync to copy it, then delete the original (as the user in the question actually did).

    – Kusalananda
    May 11 at 15:31

















That is quite interesting, I also used du -bs and I was able to reproduce what you described. Would be interesting to know what ext4 does.

– Stefano d'Antonio
May 11 at 15:17





That is quite interesting, I also used du -bs and I was able to reproduce what you described. Would be interesting to know what ext4 does.

– Stefano d'Antonio
May 11 at 15:17













@Stefanod'Antonio I believe that ext4 behaves the same. My tests were on an OpenBSD system using its native FFS filesystem.

– Kusalananda
May 11 at 15:20






@Stefanod'Antonio I believe that ext4 behaves the same. My tests were on an OpenBSD system using its native FFS filesystem.

– Kusalananda
May 11 at 15:20














@Kusalanada what I meant is how this works behind the scenes: what's the threshold and why it does that.

– Stefano d'Antonio
May 11 at 15:21





@Kusalanada what I meant is how this works behind the scenes: what's the threshold and why it does that.

– Stefano d'Antonio
May 11 at 15:21




1




1





@Stefanod'Antonio Possibly to reduce filesystem fragmentation. There is no threshold. The directory node is simply never truncated.

– Kusalananda
May 11 at 15:22





@Stefanod'Antonio Possibly to reduce filesystem fragmentation. There is no threshold. The directory node is simply never truncated.

– Kusalananda
May 11 at 15:22




1




1





@danieldeveloper001 I'm not a Linux user and don't know if there's some specific tool for doing this on ext4 filesystems, but the portable way would be to move the contents of a directory to a new directory and then rmdir the original directory. Or, for a whole hierarchy, use rsync to copy it, then delete the original (as the user in the question actually did).

– Kusalananda
May 11 at 15:31






@danieldeveloper001 I'm not a Linux user and don't know if there's some specific tool for doing this on ext4 filesystems, but the portable way would be to move the contents of a directory to a new directory and then rmdir the original directory. Or, for a whole hierarchy, use rsync to copy it, then delete the original (as the user in the question actually did).

– Kusalananda
May 11 at 15:31














1














Have you checked the filesystem block size? Even though both devices use the same filesystem, it is a possibility that the block sizes are different and this could explain the "slightly different" sizes.



When, for instance, storing a bunch of 1KiB files in a device with filesystem set to use 8KiB block size, there will be a waste of 7KiB per used block. The actual size that your files are taking from your disk, is the size of the used blocks, not the size of the files itself in this case (unless there is some kind of tool to store multiple files per block). Try checking your different devices block size with the command below.



# blockdev --getbsz <DEVICE>





share|improve this answer








New contributor



danieldeveloper001 is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
Check out our Code of Conduct.



















  • Should this not be covered by using --block-size=1 as a du option?

    – Stefano d'Antonio
    May 11 at 15:17











  • Just checked they both return 4096.

    – Stefano d'Antonio
    May 11 at 15:18











  • Sorry, started writing before your edit ;)

    – danieldeveloper001
    May 11 at 15:18











  • No worries, it was a good guess.

    – Stefano d'Antonio
    May 11 at 15:19















answered May 11 at 15:16









danieldeveloper001














