Is `curl something | sudo bash -` a reasonably safe installation method?
The most straightforward way to install NodeJS on Ubuntu or Debian seems to be Nodesource, whose installation instructions say to run:
curl -sL https://deb.nodesource.com/setup_12.x | sudo -E bash -
This clashes with some basic security rules I learned long ago, such as "be suspicious of downloads" and "be cautious with sudo". However, I learned those rules long ago, and nowadays it seems like everyone is doing this...well, at least it has 350 upvotes on askubuntu.com.
As I read various opinions on other sites, I'm finding that some people also think curl-pipe-sudo-bash is unsafe:
- Phil. (idontplaydarts.com, 2016-04-19) Detecting the use of "curl | bash" server side
- Stemm, Mark. (Sysdig.com, 2016-06-13) Friends don't let friends Curl | Bash.
- Stackoverflow.com. (2015-04-01 and onward) Why using curl | sudo sh is not advised? (also linked from askubuntu)
while some people think it's just as safe as any other practical installation method:
- McLellan, Bryan. (Github.com/btm, 2013-09-25) Why curl | sudo bash is good.
- YCombinator.com. (2016-10-22 and onward) "Curl Bash piping" wall of shame.
- Varda, Kenton. (Sandstorm.io, 2015-09-24) Is curl|bash insecure?
There are also some that explore the problem without giving a decisive opinion:
- Granquist, Lamont. (Chef.io, 2015-07-16) 5 Ways to Deal With the install.sh Curl Pipe Bash problem.
Since there's no clear consensus from other sites, I'm asking here: Is curl-pipe-sudo-bash a reasonably safe installation method, or does it carry unnecessary risks that can be avoided by some other method?
Tags: curl, sudo, install
This makes you trust the server you downloaded from -- note that normally, you don't need to trust the server, because if you're downloading an RPM or deb from your distro, it's signed, so you can just trust the signature to ensure that you have a genuine package even if an attacker controls the mirror/server you downloaded it from, or if that attacker controls your ISP and is substituting their own host, etc.
– Charles Duffy
Jul 13 at 16:03
Note too that it's very possible to detect whether code is being piped to bash (via timing analysis), so folks can give different download results for code being saved for inspection vs code being run directly.
– Charles Duffy
Jul 13 at 16:05
blog.taz.net.au/2018/03/07/brawndo-installer - it's got what users crave. alias brawndo='curl $1 | sudo bash'
– cas
Jul 15 at 4:13
While you asked specifically regarding "safety" (and this is security.se after all), I'd like to mention that there might be other interesting factors besides safety when evaluating an installation method (examples: can you find out later what was installed? Can you uninstall easily and reliably? Are you notified about security updates? Can you install different versions of the same software on one system?)
– oliver
Jul 15 at 12:50
@cas, curl $1 doesn't look at an argument to the brawndo alias, it looks at your shell's current argument list, which for interactive shells is usually empty. You probably want a function: brawndo() { curl "$1" | sudo bash; } -- or, to pass arguments past the first to the received script: brawndo() { url=$1; shift; curl "$url" | sudo bash -s "$@"; } (of course, all that is said with my shell hat on; with my security hat, don't do any of this).
– Charles Duffy
Jul 15 at 15:00
asked Jul 12 at 18:13 by Krubo
6 Answers
It's about as safe as any other standard[1] installation method as long as you:
- Use HTTPS (and reject certificate errors)
- Are confident in your certificate trust store
- Trust the server you're downloading from
You can, and should, separate the steps out -- download the script[2], inspect it, and see if it's doing anything fishy before running the script you downloaded[3]. This is a good idea. It won't hurt anything if you do it and you might catch a compromise, which you can report to the source and the community at large. Be prepared to dig through quite a lot of Bash, if my experience with such things is any indicator. You can also try 'expanding' it, downloading any scripts that it would download separately and tweaking the script to call those local copies, if you're particularly worried about evil servers, but at some point you have to decide to just use a different server if you trust the first one so little.
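The separated workflow described above can be sketched as follows. A local stand-in file replaces the real download so the steps are visible without touching the network; in practice the first step would be a curl of the NodeSource URL from the question:

```shell
# In practice the first step would be:
#   curl -fsSL -o setup.sh https://deb.nodesource.com/setup_12.x
# A stand-in script is created here so the sketch is self-contained.
printf '#!/bin/bash\necho "installer ran"\n' > setup.sh

# Inspect before executing: read the whole thing, and scan for
# patterns worth a closer look (nested downloads, eval, etc.).
grep -nE 'curl|wget|eval' setup.sh || true

# Only after review, run the exact copy you inspected -- not a fresh download.
bash setup.sh
```

Running the inspected copy matters: re-piping from the server after inspection would let a malicious server serve different content the second time.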
Be aware that if the server (deb.nodesource.com) is compromised, you basically have no recourse. Many package managers offer to verify GPG signatures on packages, and even though a fundamental part of the keysigning architecture is broken, this does still by and large work. You can manually specify the CA for wget and curl, though this only proves you're really connecting to that server, not that the server is serving safe code or that it's legitimate code from the creators.[4]
If you're worried about arbitrary code execution, APT definitely allows that, and I'm fairly confident both Homebrew and Yum do as well. So comparatively, it's not unsafe. This method allows greater visibility; you know precisely what's happening: A file is being downloaded, and then interpreted by Bash as a script. Odds are good you have enough knowledge already to start investigating the script. At worst, the Bash may call another language you don't know, or download and run a compiled executable, but even those actions can be noticed beforehand and, if you're so inclined, investigated.
As a side note, given that a lot of the time you need to install things with sudo, I don't see its use here as any special concern. It's mildly disconcerting, yes, but no more so than sudo apt install ....
1: There are significantly safer package managers, of course -- I'm only talking about standard ones like APT and yum.
2: ...while being careful with your copy/pastes, naturally. If you don't know why you should be careful with your copy/pastes, consider this HTML: Use this command: <code>echo 'Hello<span style="font-size: 0">, Evil</span>!'</code>. To be safe, try pasting into a (GUI) text editor, and ensure you copied what you think you did. If you didn't, then stop trusting that server immediately.
3: You can actually detect whether the script is just being downloaded or being downloaded-and-executed, because interpreting a script with Bash takes a different amount of time than saving it to a file, and Linux's pipe system can "back up", which can make those timing differences visible to the server. If you ran the exact curl | sudo bash command they gave, your examination is (at least if it's a malicious server...) meaningless.
4: Then again, it looks like NodeSource is creating some sort of custom installer, which wouldn't be signed by the Node team anyway, so... I'm not convinced that it's less safe in this particular case.
Upvoted, but you did miss a few important considerations. 1) Make sure the source of the download is trustworthy (not some fly-by-night domain - HTTPS is free these days, and never did mean a domain wasn't malicious - or a writable file in some cloud or anything). 2) Bear in mind that you're trusting the server absolutely, which is not necessary. Linux package managers (for example) usually support and sometimes require a GPG signature or similar, so even if somebody compromised the server and replaced the package, it would get rejected. Bash has no such protection.
– CBHacking
Jul 12 at 23:47
@CBHacking: Most Linux distros ship with a default set of trusted key roots for the package managers. Even though this usually uses GPG, the trust mechanism is set up essentially like a PKI. Using PKI wouldn't really be more secure.
– Lie Ryan
Jul 13 at 12:25
As a reminder, you should be careful if you're copying text into your terminal: thejh.net/misc/website-terminal-copy-paste ; Also IDN homograph attacks might make it a bit more difficult to verify server identity.
– Larkeith
Jul 13 at 23:29
@NicHartley, ...also, note that rpm and dpkg aren't the only competing package formats out there. Consider Nix (wherein all builds are run in a networkless sandbox with only access to their declared dependencies; wherein packages aren't allowed to create setuid files; wherein all software is addressed by a hash of its sources, dependencies and build steps) as an alternative that does far better than any of them, and thus far, far better than the curl | bash travesty.
– Charles Duffy
Jul 14 at 13:27
@NicHartley, ...I've put an answer together; your feedback on what I covered/missed/probably should have left out would be welcome.
– Charles Duffy
Jul 15 at 12:12
There are three major security features you'd want to look at when comparing curl ... | bash installation to a Unix distribution packaging system like apt or yum.
The first is ensuring that you are requesting the correct file(s). Apt does this by keeping its own mapping of package names to more complex URLs; the OCaml package manager is just opam, offering fairly easy verification. By contrast, if I use opam's curl/shell installation method, I need to verify the URL https://raw.githubusercontent.com/ocaml/opam/master/shell/install.sh, using my personal knowledge that raw.githubusercontent.com is a well-run site (owned and run by GitHub) that is unlikely to have its certificate compromised, that it is indeed the correct site for downloading raw content from GitHub projects, that the ocaml GitHub account is indeed the vendor whose software I want to install, and that opam/master/shell/install.sh is the correct path to the software I want. This isn't terribly difficult, but you can see the opportunities for human error here (as compared to verifying apt-get install opam) and how they could be magnified with even less clear sites and URLs. In this particular case, too, an independent compromise of either of the two vendors above (GitHub and the OCaml project) could compromise the download without the other being able to do much about it.
The second security feature is confirming that the file you got is
actually the correct one for the name above. The curl/shell method
relies solely on the security provided by HTTPS, which could be
compromised on the server side (unlikely so long as the server
operator takes great care) and on the client side (far more frequent
than you'd think in this age of TLS interception). By contrast, apt
generally downloads via HTTP (thus rendering the entire TLS PKI
irrelevant) and checks the integrity of downloads via a PGP signature,
which is considerably easier to secure (because the secret keys don't
need to be online, etc.).
The third is ensuring that, when you have the correct files from the
vendor, the vendor itself is not distributing malicious files.
This comes down to how reliable the vendor's packaging and review
processes are. In particular, I'd tend to trust the official Debian or
Ubuntu teams that sign release packages to have produced a
better-vetted product, both because that's the primary job of those
teams and because they're doing an extra layer of review on top of
what the upstream vendor did.
There's also an additional sometimes-valuable feature provided by
packaging systems such as apt that may or may not be provided by
systems using the curl/shell install procedure: audit of installed
files. Because apt, yum, etc. have hashes for most of the files
supplied by a package, it's possible to check an existing package
installation (using programs such as debsums or rpm -V) to see if any of those installed files have been modified.
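The audit that debsums and rpm -V perform can be mimicked with a plain hash manifest; everything below (the pkgroot directory, the manifest filename) is a made-up stand-in for what the package database records at install time:

```shell
# Record hashes at "install" time, as the package database does.
mkdir -p pkgroot
printf 'v1\n' > pkgroot/config
sha256sum pkgroot/config > manifest.sha256

# Simulate post-install tampering with an installed file.
printf 'v2\n' > pkgroot/config

# The audit: recompute and compare. A modified file is reported as FAILED.
sha256sum -c manifest.sha256 || echo 'modified since install'
```

A curl/shell install that doesn't register its files anywhere leaves you with nothing to run this comparison against.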
The curl/shell install method can offer a couple of potential advantages over using a packaging system such as apt or yum:
- You're generally getting a much more recent version of the software and, especially if it's a packaging system itself (such as pip or Haskell Stack), it may do regular checks (when used) to see if it's up-to-date and offer an update system.
- Some systems allow you to do a non-root (i.e., in your home directory, owned by you) install of the software. For example, while the opam binary installed by the above install.sh is put into /usr/local/bin/ by default (requiring sudo access on many systems), there's no reason you can't put it in ~/.local/bin/ or similar, thus never giving the install script or subsequent software any root access at all. This has the advantage of ensuring that root compromise is avoided, though it does make it easier for later software runs to compromise the installed version of the software that you're using.
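The non-root approach described above can be sketched like this; the localbin directory and the opam stub are hypothetical stand-ins for a real binary landing in ~/.local/bin:

```shell
# A directory you own plays the role of ~/.local/bin.
mkdir -p localbin

# Stand-in for the binary a user-level install.sh would drop there.
printf '#!/bin/sh\necho opam-stub\n' > localbin/opam
chmod +x localbin/opam

# Put it on PATH for this session; sudo was never involved.
export PATH="$PWD/localbin:$PATH"
opam
```

The trade-off is exactly as stated: nothing here ever touched root, but anything running as your user can overwrite localbin/opam later.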
You have missed one disadvantage: your local package manager doesn’t know about software installed this way. So any automated checks/downloads for patched versions won’t work.
– Gaius
Jul 15 at 10:30
Actually, many software systems where this is both a recommended and typical installation know how to do their own update checks and updates. Often they are (usual language-specific) packaging systems themselves (e.g., pip, rvm, Haskell Stack). But this is certainly something to check and keep in mind for whatever particular system you install this way!
– Curt J. Sampson
Jul 15 at 13:52
Submitting an answer to my own question. Not sure if this is the best answer, but I'm hoping other answers will address these points.
curl something | sudo bash - on Linux is equally safe as downloading something on Windows and right-clicking "run as administrator". One can argue that this is 'reasonably safe', but as a recent xkcd suggests, nobody really knows how bad computer security is these days. In any event, this method is NOT as safe as other installation methods.
All safer methods include a step to verify the download integrity before installing anything, and there's no good reason to skip this step. Installers like apt have some form of this step built in. The goal is to ensure that what you have downloaded is what the publisher intended. This doesn't guarantee that the software is free of its own vulnerabilities, but it should at least protect against simple attacks that replace the download with malware. The essence is simply to verify the MD5 and SHA256 checksums posted by the software publisher. Some further improvements are possible:
- It's better to get these checksums via a different network path, such as by calling a friend in another country, which would protect against MITM attacks.
- It's better to get the checksums at least a day earlier/later, which would protect in case the publisher's website was briefly taken over but the takeover was stopped within a day.
- It's better to verify the checksums themselves using GPG, which would protect in case the publisher's website was compromised but their GPG private key wasn't.
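The core verification step can be sketched as follows; installer.sh and its .sha256 companion are hypothetical stand-ins for a downloaded script and the checksum file the publisher would post (ideally obtained over a different network path, as the bullet points note):

```shell
# Stand-in for the downloaded installer script.
printf 'echo ok\n' > installer.sh

# Stand-in for the checksum the publisher posts separately.
sha256sum installer.sh > installer.sh.sha256

# Verify before running anything; proceed only when the check passes.
sha256sum -c installer.sh.sha256 && bash installer.sh
```

With && the script never executes if the checksum mismatches, which is the whole point of separating download from execution.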
One side comment: Some sites say you should download the sh script and then inspect it before running it. Unfortunately, this gives a false sense of security unless you vet the script with a practically impossible level of precision. The shell script is probably a few hundred lines, and very tiny changes (such as an obfuscated one-character change to a URL) can convert a shell script into a malware installer.
"equally safe as downloading something on Windows and right-clicking run as administrator" - While that comparison may be true, it's not common for Windows software vendors to instruct users to start their installer that way.
– aroth
Jul 14 at 12:21
@aroth Good point: Even Windows vendors are moving to (somewhat) safer install methods nowadays.
– Krubo
Jul 15 at 2:34
"Reasonably safe" depends on your goalposts, but curl | bash is well behind the state of the art.
Let's take a look at the kind of verification one might want:
- Ensuring that someone malicious at your ISP can't do a man-in-the-middle to feed you arbitrary code.
- Ensuring that you're getting the same binaries the author published
- Ensuring you're getting the same binaries that someone downloading the same filename also got.
- Ensuring that the binaries you download reflect a specific, auditable set of sources and build steps, and can be reproduced from same.
- Separating installing software from running software -- if you're installing software to be run by an untrusted, low-privileged user, no high-privileged account should be put at risk in the process.
With curl | sudo bash, you get only the first of those; with rpm or dpkg you get some of them; with nix, you can get all of them.
Using curl to download via https, you have some safety against a man-in-the-middle attacker, insofar as that attacker can't forge a certificate and key that's valid for the remote site. (You don't have safety against an attacker who broke into the remote server, or one who has access to the local CA your company put into all CA store lists on corporate-owned hardware so they could MITM outgoing SSL connections for intentional "security" purposes!) This is the only threat model curl | sudo bash sometimes is successful at protecting you against.
Ensuring that you're getting the same binaries the author published can be done with a digital signature by that author (Linux distributions typically distribute a keychain of OpenPGP keys belonging to individuals authorized to publish packages to that distribution, or have a key they use for packages they built themselves, and use access control measures to restrict which authors are able to get packages into their build systems).
Deployed correctly, rpm or dpkg gives you this safety; curl | bash does not.
Ensuring that requesting the same name always returns the same binaries is trickier, if an authorized author's key could have been captured. This can be accomplished, however, if the content you're downloading is hash-addressed; to publish different content under the same name, an attacker would need to decouple the hash from the file's contents (trivially detected if it's the hash of the binary that's published).
Moving to hash-addressed build publication has two possible approaches:
If the hash is of the outputs of the build, an attacker's easiest approach is to find the mechanism by which the end-user looked up that hash and replace it with a malicious value -- so the point-of-attack moves, but the vulnerability itself does not.
If the hash is of the inputs to the build, checking that the output of the build genuinely matches those inputs requires more work (namely, rerunning the build!) to be done to check, but that check becomes far harder to evade.
The latter approach is the better one, even though it's expensive to check and puts extra work on the folks doing software packaging (to deal with any places where the author of a program folded timestamps or other non-reproducible elements into the build process itself).
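As a concrete illustration of the output-hash variant, here is a minimal shell sketch (my addition; the URL, the download step, and the expected digest are all placeholders -- the digest shown is simply the SHA-256 of an empty file, standing in for a value published out-of-band):

```shell
# Hypothetical output-hash verification: only run the installer if its SHA-256
# digest matches a value obtained through a separate, trusted channel.
expected="e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
: > install.sh   # stand-in for: curl -fsSL https://example.com/install.sh -o install.sh
actual=$(sha256sum install.sh | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
    echo "digest OK"
else
    echo "digest mismatch -- refusing to run" >&2
    exit 1
fi
```

Note that, exactly as described above, this only moves the point of attack to wherever the expected digest came from; an input-hash scheme closes that gap at the cost of rerunning the build.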
Dealing with malicious authors is not in the security model that rpm or dpkg tries to address, and of course, curl | bash doesn't do anything about it either.
Separating installation from runtime is a matter of designing the serialization format up-front without dangerous features -- not supporting setuid or setgid bits, not supporting install-time unsandboxed run scripts with arbitrary code, etc. curl | sudo bash gives you no protection here, but rpm and dpkg also don't. nix, by contrast, lets any unprivileged user install software into the store -- but the NAR serialization format it uses won't represent setuid or setgid bits, content in the store is unreferenced by any user account that doesn't explicitly request it or by a piece of software that depends on it, and cases where software needs setuid privileges to be installed require explicit out-of-band administrative action before those bits actually get set. Only oddball, niche, specialty software installation methods like nix get this right.
Keep in mind that unless the keys for the specific project come with your distro -- not impossible, obviously, but not necessarily likely -- you'll have to get them from somewhere. That "somewhere" currently defaults to a broken keyserver network. It would be worth mentioning other trusted methods. Also, you can effectively client-side pin a specific certificate with curl, to remove that one attack (not the others). Finally, curl | bash is more likely to be up-to-date than anything from a package manager, precisely because it's so uncontrolled. Probably worth a mention if nothing else.
– Nic Hartley
Jul 15 at 14:15
Aside from those, which amount to minor nitpicks, I like this answer for offering a safer alternative to even "normal" installation methods, which even the currently highest-voted answer doesn't do. If only the author could have figured out how to incorporate it smoothly.
– Nic Hartley
Jul 15 at 14:16
Oh oops, you do address my first point, I just missed that paragraph. Sorry. Ignore that bit.
– Nic Hartley
Jul 15 at 14:19
I do agree that keeping things up-to-date is a pain; the backlog of PRs awaiting review/merge to nixpkgs is extensive, and the (very!) high bar to getting a commit bit helps keep it that way.
– Charles Duffy
Jul 15 at 15:07
One option would be to attempt behavioural analysis of what the script does, by running the curl command separately to fetch a copy of it.
Then run it in a Linux VM and watch what outbound connections happen, etc.; you could even run file integrity monitoring on the system and see what's altered when it runs.
Ultimately, the context is important that this behaviour could lead to compromise, but isn't especially worse than many of the other methods by which people get software. Even with the behavioural analysis I mentioned above, you're limited by the secondary sources the script may retrieve from, which could be dynamic too - but so are the dependencies of real software, so at some level, you have to rely on trust of the source to not link something bad.
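A cheap supplementary check along these lines (a sketch of my own, not something this answer prescribes): fetch the script twice and compare the copies, since a server doing client-targeted substitution may serve different bytes each time. The fetch function below is a stand-in for a real curl invocation against a placeholder URL, so the sketch runs without a network:

```shell
# fetch stands in for: curl -fsSL https://example.com/install.sh
fetch() { printf 'echo hello from installer\n'; }

fetch > /tmp/copy1.sh
fetch > /tmp/copy2.sh
if cmp -s /tmp/copy1.sh /tmp/copy2.sh; then
    echo "copies identical"
else
    echo "copies differ -- server may be serving dynamic or targeted content" >&2
fi
```

This won't catch a server that keys on pipe-to-shell timing rather than on the client's identity, so it supplements inspection rather than replacing it.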
Running curl separately won't necessarily give the same result. There's been software developed to detect whether there's a shell on the other end by looking at how quickly chunks of the script are retrieved by the downloading end, and inject malicious code at the end of the file if-and-only-if the timing patterns look like it's being directly piped to a shell; see idontplaydarts.com/2016/04/detecting-curl-pipe-bash-server-side
– Charles Duffy
Jul 14 at 11:48
@CharlesDuffy If you run the script in the VM, then run the script you downloaded again on your main machine, assuming they can't detect a VM then it should be safe. (Big assumption, I know, but at some point you should just stop trusting that server and find another installation method...)
– Nic Hartley
Jul 14 at 17:01
You may avoid things due to careless scripts, but a malicious one will not make it obvious that it is doing malicious things. You will just have the rootkit in both the VM and your system afterwards, e.g. when it activates itself with a delay of three days, waits for incoming connections instead of creating connections itself, or uses other hiding techniques. It could be worth trying tools like maybe, but note that they may not provide the best security (i.e. their sandbox is not perfect) either.
– allo
Jul 15 at 8:47
No, it's not as safe. Your download can fail in the middle.
If your download fails in the middle then you'll have run a partial script, which can potentially fail to do some operations that it was supposed to do (cleanup, configuration, etc.).
It's not likely if the script is small or your connection is fast, but it's possible, especially on a slow connection.
This is an example of the difference between safety and security. :)
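For completeness, script authors can defend against exactly this failure with a well-known pattern (my addition, not part of this answer): wrap all the work in a function and call it on the final line, so a partially received script defines the function but never executes it.

```shell
# install.sh -- nothing runs until the final line has arrived intact.
main() {
    echo "installing files"
    echo "writing configuration"
}

main "$@"   # a download cut off before this line performs no actions at all
```

If the transfer is truncated inside the function body, bash reports a syntax error instead of running half the commands; if it's truncated before the final call, nothing runs at all.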
6 Answers
It's about as safe as any other standard1 installation method as long as you:
- Use HTTPS (and reject certificate errors)
- Are confident in your certificate trust store
- Trust the server you're downloading from
You can, and should, separate the steps out -- download the script2, inspect it, and see if it's doing anything fishy before running the script you downloaded3. This is a good idea. It won't hurt anything if you do it and you might catch a compromise, which you can report to the source and the community at large. Be prepared to dig through quite a lot of Bash, if my experience with such things is any indicator. You can also try 'expanding' it, separately downloading any scripts that it would download and tweaking the script to call those, if you're particularly worried about evil servers, but at some point you have to decide to just use a different server if you trust the first one so little.
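That workflow can be sketched as follows (a self-contained illustration of my own; fetch stands in for a real curl call against a placeholder URL, so the sketch runs offline):

```shell
# fetch stands in for: curl -fsSL https://example.com/install.sh
fetch() { printf 'echo "installer ran"\n'; }

fetch > /tmp/install.sh                        # 1. download to a file, not into a pipe
grep -n 'curl\|wget' /tmp/install.sh || true   # 2. inspect, e.g. flag nested downloads
bash /tmp/install.sh                           # 3. execute the exact bytes you inspected
```

The key point is step 3: run the same file you read, never a fresh fetch, so the server has no second chance to serve different content.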
Be aware that if the server (deb.nodesource.com) is compromised, you basically have no recourse. Many package managers offer to verify GPG signatures on packages, and even though a fundamental part of the keysigning architecture is broken, this does still by and large work. You can manually specify the CA for wget and curl, though this only proves you're really connecting to that server, not that the server is serving safe code or that it's legitimate code from the creators.4
If you're worried about arbitrary code execution, APT definitely allows that, and I'm fairly confident both Homebrew and Yum do as well. So comparatively, it's not unsafe. This method allows greater visibility; you know precisely what's happening: A file is being downloaded, and then interpreted by Bash as a script. Odds are good you have enough knowledge already to start investigating the script. At worst, the Bash may call another language you don't know, or download and run a compiled executable, but even those actions can be noticed beforehand and, if you're so inclined, investigated.
As a side note, given that a lot of the time you need to install things with sudo, I don't see its use here as any special concern. It's mildly disconcerting, yes, but no more so than sudo apt install ....
1: There are significantly safer package managers, of course -- I'm only talking about standard ones like APT and yum.
2: ...while being careful with your copy/pastes, naturally. If you don't know why you should be careful with your copy/pastes, consider this HTML: Use this command: <code>echo 'Hello<span style="font-size: 0">, Evil</span>!'</code>
. To be safe, try pasting into a (GUI) text editor, and ensure you copied what you think you did. If you didn't, then stop trusting that server immediately.
3: You can actually detect whether the script is just being downloaded or being downloaded-and-executed, because interpreting a script with Bash takes a different amount of time than saving it to a file, and Linux's pipe system can "back up", which can make those timing differences visible to the server. If you ran the exact curl | sudo bash
command they gave, your examination is (at least if it's a malicious server...) meaningless.
4: Then again, it looks like NodeSource is creating some sort of custom installer, which wouldn't be signed by the Node team anyway, so... I'm not convinced that it's less safe in this particular case.
Upvoted, but you did miss a few important considerations. 1) Make sure the source of the download is trustworthy (not some fly-by-night domain - HTTPS is free these days, and never did mean a domain wasn't malicious - or a writable file in some cloud or anything). 2) Bear in mind that you're trusting the server absolutely, which is not necessary. Linux package managers (for example) usually support and sometimes require a GPG signature or similar, so even if somebody compromised the server and replaced the package, it would get rejected. Bash has no such protection.
– CBHacking
Jul 12 at 23:47
@CBHacking: Most Linux distros ship with a default set of trusted key roots for the package managers. Even though this usually uses GPG, the trust mechanism is set up essentially like a PKI. Using PKI wouldn't really be more secure.
– Lie Ryan
Jul 13 at 12:25
As a reminder, you should be careful if you're copying text into your terminal: thejh.net/misc/website-terminal-copy-paste ; Also IDN homograph attacks might make it a bit more difficult to verify server identity.
– Larkeith
Jul 13 at 23:29
@NicHartley, ...also, note that rpm and dpkg aren't the only competing package formats out there. Consider Nix (wherein all builds are run in a networkless sandbox with only access to their declared dependencies; wherein packages aren't allowed to create setuid files; wherein all software is addressed by a hash of its sources, dependencies and build steps) as an alternative that does far better than any of them, and thus far, far better than the curl | bash travesty.
– Charles Duffy
Jul 14 at 13:27
@NicHartley, ...I've put an answer together; your feedback on what I covered/missed/probably should have left out would be welcome.
– Charles Duffy
Jul 15 at 12:12
edited Jul 15 at 14:18
answered Jul 12 at 18:51
Nic Hartley
There are three major security features you'd want to look at when
comparing curl ... | bash
installation to a Unix distribution
packaging system like apt
or yum
.
The first is ensuring that you are requesting the correct file(s). Apt
does this by keeping its own mapping of package names to more complex
URLs; the name of the OCaml package manager is just opam, offering
fairly easy verification. By contrast, if I use opam's curl/shell
installation method, I need to verify the URL
https://raw.githubusercontent.com/ocaml/opam/master/shell/install.sh,
using my personal knowledge that raw.githubusercontent.com is a
well-run site (owned and run by GitHub) that is unlikely to have its
certificate compromised, that it is indeed the correct site for
downloading raw content from GitHub projects, that the ocaml GitHub
account is indeed the vendor whose software I want to install, and
that opam/master/shell/install.sh is the correct path to the software
I want. This isn't terribly difficult, but you can see the
opportunities for human error here (as compared to verifying
apt-get install opam) and how they could be magnified with even less
clear sites and URLs.
In this particular case, too, an independent compromise of either of
the two vendors above (GitHub and the OCaml project) could compromise
the download without the other being able to do much about it.
The second security feature is confirming that the file you got is
actually the correct one for the name above. The curl/shell method
relies solely on the security provided by HTTPS, which could be
compromised on the server side (unlikely so long as the server
operator takes great care) and on the client side (far more frequent
than you'd think in this age of TLS interception). By contrast, apt
generally downloads via HTTP (thus rendering the entire TLS PKI
irrelevant) and checks the integrity of downloads via a PGP signature,
which is considerably easier to secure (because the secret keys don't
need to be online, etc.).
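The manual equivalent of that built-in check can be sketched as follows. The file names are placeholders, and the "downloaded" script is a local stand-in rather than a real fetch:

```shell
# A minimal sketch of the integrity check apt performs automatically,
# done by hand with sha256sum. In reality install.sh would come from
# "curl -fsSLo install.sh <URL>" and SHA256SUMS from the publisher.
printf 'echo hello from installer\n' > install.sh
sha256sum install.sh > SHA256SUMS     # the publisher would ship this file
sha256sum -c SHA256SUMS               # reports "install.sh: OK" when untouched
```

In apt's case the file carrying the hashes is itself PGP-signed, which is what lets the signing keys stay offline while clients still get end-to-end integrity over plain HTTP.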
The third is ensuring that, when you have the correct files from the vendor, the vendor itself is not distributing malicious files.
This comes down to how reliable the vendor's packaging and review
processes are. In particular, I'd tend to trust the official Debian or
Ubuntu teams that sign release packages to have produced a
better-vetted product, both because that's the primary job of those
teams and because they're doing an extra layer of review on top of
what the upstream vendor did.
There's also an additional sometimes-valuable feature provided by
packaging systems such as apt that may or may not be provided by
systems using the curl/shell install procedure: audit of installed
files. Because apt, yum, etc. have hashes for most of the files
supplied by a package, it's possible to check an existing package installation (using programs such as debsums or rpm -V) to see if any of those installed files have been modified.
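The principle behind that audit can be demonstrated without a package manager at all: record hashes at install time, recheck them later. The paths below are hypothetical:

```shell
# Sketch of what debsums / rpm -V do: compare installed files against
# hashes recorded at install time. "demo/bin/tool" stands in for a
# package-managed file.
mkdir -p demo/bin
printf '#!/bin/sh\necho tool\n' > demo/bin/tool
md5sum demo/bin/tool > manifest.md5     # what the package manager records
md5sum -c manifest.md5                  # audit passes while the file is unmodified
printf ' # tampered' >> demo/bin/tool   # simulate a post-install compromise
md5sum -c manifest.md5 || echo 'modification detected'
```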
The curl/shell install method can offer a couple of potential advantages over using a packaging system such as apt or yum:
- You're generally getting a much more recent version of the software and, especially if it's a packaging system itself (such as pip or Haskell Stack), it may do regular checks (when used) to see if it's up-to-date and offer an update system.
- Some systems allow you to do a non-root (i.e., in your home directory, owned by you) install of the software. For example, while the opam binary installed by the above install.sh is put into /usr/local/bin/ by default (requiring sudo access on many systems), there's no reason you can't put it in ~/.local/bin/ or similar, thus never giving the install script or subsequent software any root access at all. This has the advantage of ensuring that root compromise is avoided, though it does make it easier for later software runs to compromise the installed version of the software that you're using.
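A per-user install of that shape can be sketched as below; the "binary" is a stub standing in for a real download, but the key point survives: no sudo anywhere in the flow.

```shell
# Sketch of a fully non-root install: everything lands in $HOME, so
# neither the installer nor the installed program ever runs as root.
mkdir -p "$HOME/.local/bin"
printf '#!/bin/sh\necho "opam stub"\n' > "$HOME/.local/bin/opam"  # stand-in binary
chmod +x "$HOME/.local/bin/opam"
export PATH="$HOME/.local/bin:$PATH"    # note: no sudo anywhere in this flow
command -v opam                          # resolves to the per-user copy
```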
You have missed one disadvantage: your local package manager doesn’t know about software installed this way. So any automated checks/downloads for patched versions won’t work.
– Gaius
Jul 15 at 10:30
Actually, many software systems where this is both a recommended and typical installation method know how to do their own update checks and updates. Often they are (usually language-specific) packaging systems themselves (e.g., pip, rvm, Haskell Stack). But this is certainly something to check and keep in mind for whatever particular system you install this way!
– Curt J. Sampson
Jul 15 at 13:52
edited Jul 15 at 13:55
answered Jul 13 at 4:13
– Curt J. Sampson
Submitting an answer to my own question. Not sure if this is the best answer, but I'm hoping other answers will address these points.
curl something | sudo bash -
on Linux is about as safe as downloading something on Windows and right-clicking "Run as administrator". One can argue that this is 'reasonably safe', but as a recent xkcd suggests, nobody really knows how bad computer security is these days. In any event, this method is NOT as safe as other installation methods.
All safer methods include a step to verify the download integrity before installing anything, and there's no good reason to skip this step. Installers like apt
have some form of this step built in. The goal is to ensure that what you have downloaded is what the publisher intended. This doesn't guarantee that the software is free of its own vulnerabilities, but it should at least protect against simple attacks that replace the download with malware. The essence is simply to verify the MD5 and SHA256 checksums posted by the software publisher. Some further improvements are possible:
- It's better to get these checksums via a different network path, such as by calling a friend in another country, which would protect against MITM attacks.
- It's better to get the checksums at least a day earlier/later, which would protect in case the publisher's website was briefly taken over but the takeover was stopped within a day.
- It's better to verify the checksums themselves using GPG, which would protect in case the publisher's website was compromised but their GPG private key wasn't.
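The GPG improvement in the last point can be sketched end-to-end. Here a throwaway key plays the publisher's role; in reality you would import the vendor's public key and confirm its fingerprint out of band. This assumes GnuPG 2.1+ (for --pinentry-mode loopback):

```shell
# Sketch: publisher signs the checksum list; client verifies the
# signature, then the payload. The key and files here are throwaways.
export GNUPGHOME="$(mktemp -d)"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-gen-key 'Demo Publisher <demo@example.org>' default default never 2>/dev/null
printf 'installer payload\n' > installer.sh
sha256sum installer.sh > SHA256SUMS
gpg --batch --yes --pinentry-mode loopback --passphrase '' \
    --detach-sign --output SHA256SUMS.sig SHA256SUMS   # publisher's step
gpg --verify SHA256SUMS.sig SHA256SUMS                 # client: check signature
sha256sum -c SHA256SUMS                                # client: check payload
```

The value of the split is that a compromised web server can replace installer.sh and SHA256SUMS, but cannot produce a valid SHA256SUMS.sig without the private key.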
One side comment: Some sites say you should download the sh script and then inspect it before running it. Unfortunately, this gives a false sense of security unless you vet the script with a practically impossible level of precision. The shell script is probably a few hundred lines, and very tiny changes (such as an obfuscated one-character change to a URL) can convert a shell script into a malware installer.
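That one-character failure mode is easy to demonstrate. The two fragments below (invented for this demo) differ only in the hostname's "l" versus "1"; a machine diff catches it instantly, while a human skim of a few hundred lines often won't:

```shell
# Why eyeballing a script is unreliable: spot the difference by hand,
# then let diff do it.
cat > expected.sh <<'EOF'
curl -fsSL https://example.com/pkg/install.sh | sh
EOF
cat > received.sh <<'EOF'
curl -fsSL https://examp1e.com/pkg/install.sh | sh
EOF
diff expected.sh received.sh || echo 'scripts differ'
```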
"equally safe as downloading something on Windows and right-clicking run as administrator" - While that comparison may be true, it's not common for Windows software vendors to instruct users to start their installer that way.
– aroth
Jul 14 at 12:21
@aroth Good point: Even Windows vendors are moving to (somewhat) safer install methods nowadays.
– Krubo
Jul 15 at 2:34
edited Jul 13 at 16:45
answered Jul 12 at 23:50
– Krubo
"Reasonably Safe" depends on your goalposts, but curl | bash
is well behind state-of-the-art.
Let's take a look at the kind of verification one might want:
- Ensuring that someone malicious at your ISP can't do a man-in-the-middle to feed you arbitrary code.
- Ensuring that you're getting the same binaries the author published.
- Ensuring you're getting the same binaries that someone downloading the same filename also got.
- Ensuring that the binaries you download reflect a specific, auditable set of sources and build steps, and can be reproduced from same.
- Separating installing software from running software -- if you're installing software to be run by an untrusted, low-privileged user, no high-privileged account should be put at risk in the process.
With curl | sudo bash, you get only the first of those; with rpm or dpkg you get some of them; with nix, you can get all of them.
Using curl to download via https, you have some safety against a man-in-the-middle attacker, insofar as that attacker can't forge a certificate and key that's valid for the remote site. (You don't have safety against an attacker who broke into the remote server, or one who has access to the local CA your company put into all CA store lists on corporate-owned hardware so they could MITM outgoing SSL connections for intentional "security" purposes!) This is the only threat model curl | sudo bash is sometimes successful at protecting you against.
Ensuring that you're getting the same binaries the author published can be done with a digital signature by that author (Linux distributions typically distribute a keychain of OpenPGP keys belonging to individuals authorized to publish packages to that distribution, or have a key they use for packages they built themselves, and use access control measures to restrict which authors are able to get packages into their build systems).
Deployed correctly, rpm or dpkg gives you this safety; curl | bash does not.
Ensuring that requesting the same name always returns the same binaries is trickier, since an authorized author's key could have been captured. This can be accomplished, however, if the content you're downloading is hash-addressed: to publish different content under the same name, an attacker would need to decouple the inputs to the hash from the file's contents (trivially detected if it's the hash of the binary that's published).
Moving to hash-addressed build publication has two possible approaches:
If the hash is of the outputs of the build, an attacker's easiest approach is to find the mechanism by which the end-user looked up that hash and replace it with a malicious value -- so the point-of-attack moves, but the vulnerability itself does not.
If the hash is of the inputs to the build, checking that the output of the build genuinely matches those inputs requires more work (namely, rerunning the build!) to be done to check, but that check becomes far harder to evade.
The latter approach is the better one, even though it's expensive to check and puts extra work on the folks doing software packaging (to deal with any places the author of a program folded in timestamps or other non-reproducible elements to build process itself).
Dealing with malicious authors is not in the security model that
rpm
ordpkg
tries to address, and of course,curl | bash
doesn't do anything about it either.Separating installation from runtime is a matter of designing the serialization format up-front without dangerous features -- not supporting
setuid
orsetgid
bits, not supporting install-time unsandboxed run scripts with arbitrary code, etc.curl | sudo bash
gives you no protection here, butrpm
anddpkg
also don't.nix
, by contrast, lets any unprivileged user install software into the store -- but the NAR serialization format it uses won't represent setuid or setgid bits, that content in the store is unreferenced by any user account that doesn't explicitly request it or a piece of software that depends on it, and cases where software needssetuid
privileges to be installed require explicit out-of-band administrative action before those bits actually get set.Only oddball, niche, specialty software installation methods like
nix
get this right.
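The hash-addressed idea above can be sketched as a safer stand-in for `curl | sudo bash`: fetch first, compare against a hash published out-of-band, and only then execute. This is only a sketch under stated assumptions -- `verify_and_run` is a hypothetical helper, and the paths are illustrative, not from any real installer.

```shell
# Sketch: verify a downloaded installer against a pinned SHA-256 before
# running it.  In real use the file would come from
# `curl -fsSL "$url" -o "$file"` first.
verify_and_run() {
    file=$1
    expected=$2
    actual=$(sha256sum "$file" | awk '{print $1}')
    if [ "$actual" = "$expected" ]; then
        sh "$file"
    else
        echo "hash mismatch for $file: got $actual" >&2
        return 1
    fi
}

# Demo with a local stand-in for a downloaded installer:
printf 'echo install ok\n' > /tmp/install.sh
good=$(sha256sum /tmp/install.sh | awk '{print $1}')
verify_and_run /tmp/install.sh "$good"        # prints "install ok"
verify_and_run /tmp/install.sh deadbeef || echo "refused to run"
```

Note this still only pins one specific artifact; it doesn't by itself give you the signed, reproducible pipeline the answer describes, but it removes the "server sends different bytes to different people" attack for that artifact.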
Keep in mind that unless the keys for the specific project come with your distro -- not impossible, obviously, but not necessarily likely -- you'll have to get them from somewhere. That "somewhere" currently defaults to a broken keyserver network. It would be worth mentioning other trusted methods. Also, you can effectively client-side pin a specific certificate with `curl`, to remove that one attack (not the others). Finally, `curl | bash` is more likely to be up-to-date than anything from a package manager, precisely because it's so uncontrolled. Probably worth a mention if nothing else.
– Nic Hartley
Jul 15 at 14:15
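The pinning this comment describes maps to curl's `--pinnedpubkey` option, which takes a base64-encoded SHA-256 of the server's public key. Below is a sketch of computing such a pin; a throwaway locally generated key stands in for the server's key, since against a real server you'd start from `openssl s_client`, and the URL in the usage comment is hypothetical.

```shell
# Compute a value usable with `curl --pinnedpubkey sha256//<pin>`.
# A throwaway RSA key stands in for the server's certificate key here;
# in practice you'd extract the public key from the server's cert.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 \
        -out /tmp/demo-key.pem 2>/dev/null
pin=$(openssl pkey -in /tmp/demo-key.pem -pubout -outform der 2>/dev/null \
      | openssl dgst -sha256 -binary | base64)
echo "sha256//$pin"
# Usage (hypothetical URL):
#   curl --pinnedpubkey "sha256//$pin" -fsSL https://example.com/install.sh
```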
Aside from those, which amount to minor nitpicks, I like this answer for offering a safer alternative to even "normal" installation methods, which even the currently highest-voted answer doesn't do. If only the author could have figured out how to incorporate smoothly.
– Nic Hartley
Jul 15 at 14:16
Oh oops, you do address my first point, I just missed that paragraph. Sorry. Ignore that bit.
– Nic Hartley
Jul 15 at 14:19
I do agree that keeping things up-to-date is a pain; the backlog of PRs awaiting review/merge to nixpkgs is extensive, and the (very!) high bar to getting a commit bit helps keep it that way.
– Charles Duffy
Jul 15 at 15:07
edited Jul 15 at 12:16
answered Jul 15 at 12:11
Charles Duffy
359 · 2 silver badges · 9 bronze badges
One option would be to do behavioural analysis of the results: run the curl command separately to fetch a copy of whatever the script is.
Then run it in a Linux VM and watch what outbound connections happen, etc.; you could even run file-integrity monitoring on the system and see what's altered when the script runs.
Ultimately, the important context is that this behaviour could lead to compromise, but it isn't especially worse than many of the other methods by which people get software. Even with the behavioural analysis I mentioned above, you're limited by the secondary sources the script may retrieve from, which could be dynamic too -- but so are the dependencies of real software, so at some level you have to rely on trusting the source not to link in something bad.
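The file-integrity-monitoring step above can be approximated with nothing more than `find` and `sha256sum`: snapshot hashes before running the fetched script (inside the VM), run it, then diff the snapshots. The paths below are illustrative, and an `echo` stands in for actually executing the untrusted script.

```shell
# Poor-man's file integrity monitoring around an untrusted script:
# hash every file before and after, then diff the snapshots.
tree=/tmp/fim-demo
mkdir -p "$tree"
echo original > "$tree/config"
find "$tree" -type f -exec sha256sum {} + | sort > /tmp/before.txt

echo tampered > "$tree/config"   # stand-in for "run the fetched script here"

find "$tree" -type f -exec sha256sum {} + | sort > /tmp/after.txt
diff /tmp/before.txt /tmp/after.txt >/dev/null || echo "files changed by the script"
```

Real FIM tools (AIDE, Tripwire, auditd rules) do the same thing with more coverage, but the principle is just a before/after hash comparison.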
Running curl separately won't necessarily give the same result. There's been software developed to detect whether there's a shell on the other end by looking at how quickly chunks of the script are retrieved by the downloading end, and inject malicious code at the end of the file if-and-only-if the timing patterns look like it's being directly piped to a shell; see idontplaydarts.com/2016/04/detecting-curl-pipe-bash-server-side
– Charles Duffy
Jul 14 at 11:48
@CharlesDuffy If you run the script in the VM, then run the script you downloaded again on your main machine, assuming they can't detect a VM then it should be safe. (Big assumption, I know, but at some point you should just stop trusting that server and find another installation method...)
– Nic Hartley
Jul 14 at 17:01
You may avoid problems caused by careless scripts, but a malicious one will not make it obvious that it is doing malicious things. You will just have the rootkit in both the VM and your system afterwards, e.g. when it activates itself with a delay of three days, waits for incoming connections instead of creating connections itself, or uses other hiding techniques. It could be worth trying tools like maybe, but note that they may not provide the best security either (i.e. their sandbox may not be perfect).
– allo
Jul 15 at 8:47
answered Jul 13 at 4:42
pacifist
739 · 3 silver badges · 8 bronze badges
1
Running curl separately won't necessarily give the same result. There's been software developed to detect whether there's a shell on the other end by looking at how quickly chunks of the script are retrieved by the downloading end, and inject malicious code at the end of the file if-and-only-if the timing patterns look like it's being directly piped to a shell; see idontplaydarts.com/2016/04/detecting-curl-pipe-bash-server-side
– Charles Duffy
Jul 14 at 11:48
@CharlesDuffy If you run the script in the VM, then run the script you downloaded again on your main machine, assuming they can't detect a VM then it should be safe. (Big assumption, I know, but at some point you should just stop trusting that server and find another installation method...)
– Nic Hartley
Jul 14 at 17:01
1
You may avoid things due to careless scripts, but a malicious one will not make obvious that it is doing malicious things. You will just have the rootkit in both the VM and your system afterwards, e.g. when it activates itself with a delay of three days, waits for incoming connections instead of creating connections itself or uses other hiding techniques. It could be worth trying tools like maybe, but note that they may not provide the best security (i.e. their sandbox is not be perfect) either.
– allo
Jul 15 at 8:47
add a comment |
1
Running curl separately won't necessarily give the same result. There's been software developed to detect whether there's a shell on the other end by looking at how quickly chunks of the script are retrieved by the downloading end, and inject malicious code at the end of the file if-and-only-if the timing patterns look like it's being directly piped to a shell; see idontplaydarts.com/2016/04/detecting-curl-pipe-bash-server-side
– Charles Duffy
Jul 14 at 11:48
@CharlesDuffy If you run the script in the VM, then run the script you downloaded again on your main machine, assuming they can't detect a VM then it should be safe. (Big assumption, I know, but at some point you should just stop trusting that server and find another installation method...)
– Nic Hartley
Jul 14 at 17:01
1
You may avoid things due to careless scripts, but a malicious one will not make obvious that it is doing malicious things. You will just have the rootkit in both the VM and your system afterwards, e.g. when it activates itself with a delay of three days, waits for incoming connections instead of creating connections itself or uses other hiding techniques. It could be worth trying tools like maybe, but note that they may not provide the best security (i.e. their sandbox is not be perfect) either.
– allo
Jul 15 at 8:47
1
1
Running curl separately won't necessarily give the same result. There's been software developed to detect whether there's a shell on the other end by looking at how quickly chunks of the script are retrieved by the downloading end, and inject malicious code at the end of the file if-and-only-if the timing patterns look like it's being directly piped to a shell; see idontplaydarts.com/2016/04/detecting-curl-pipe-bash-server-side
– Charles Duffy
Jul 14 at 11:48
Running curl separately won't necessarily give the same result. There's been software developed to detect whether there's a shell on the other end by looking at how quickly chunks of the script are retrieved by the downloading end, and inject malicious code at the end of the file if-and-only-if the timing patterns look like it's being directly piped to a shell; see idontplaydarts.com/2016/04/detecting-curl-pipe-bash-server-side
– Charles Duffy
Jul 14 at 11:48
@CharlesDuffy If you run the script in the VM, then run the script you downloaded again on your main machine, assuming they can't detect a VM then it should be safe. (Big assumption, I know, but at some point you should just stop trusting that server and find another installation method...)
– Nic Hartley
Jul 14 at 17:01
@CharlesDuffy If you run the script in the VM, then run the script you downloaded again on your main machine, assuming they can't detect a VM then it should be safe. (Big assumption, I know, but at some point you should just stop trusting that server and find another installation method...)
– Nic Hartley
Jul 14 at 17:01
1
1
You may avoid things due to careless scripts, but a malicious one will not make obvious that it is doing malicious things. You will just have the rootkit in both the VM and your system afterwards, e.g. when it activates itself with a delay of three days, waits for incoming connections instead of creating connections itself or uses other hiding techniques. It could be worth trying tools like maybe, but note that they may not provide the best security (i.e. their sandbox is not be perfect) either.
– allo
Jul 15 at 8:47
No, it's not as safe. Your download can fail in the middle.
If your download fails in the middle, you'll have run a partial script, which may skip operations it was supposed to perform (cleanup, configuration, etc.).
It's not likely if the script is small or your connection is fast, but it's possible, especially on a slow connection.
This is an example of the difference between safety and security. :)
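Script authors can defend against the truncation hazard this answer describes with the "wrap everything in main" pattern: a cut-off download then defines at most an incomplete function (a parse error) or a complete one that is never called, so nothing half-runs. A minimal sketch, with placeholder install steps:

```shell
#!/usr/bin/env bash
# If the download is truncated anywhere before the final line, bash only
# parses function definitions and never runs them -- a partial script does
# nothing instead of half-installing.
set -euo pipefail

main() {
  # Hypothetical install steps for illustration:
  echo "fetching components"
  echo "writing config"
  echo "done"
}

main "$@"   # work starts only if this last line arrived intact
```

Only the final `main "$@"` line triggers execution, so safety (in this answer's sense) no longer depends on the transfer completing.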
answered Jul 15 at 23:14
MehrdadMehrdad
1,5051 gold badge13 silver badges22 bronze badges
This makes you trust the server you downloaded from -- note that normally, you don't need to trust the server, because if you're downloading an RPM or deb from your distro, it's signed, so you can just trust the signature to ensure that you have a genuine package even if an attacker controls the mirror/server you downloaded it from, or if that attacker controls your ISP and is substituting their own host, etc.
– Charles Duffy
Jul 13 at 16:03
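The signature point in this comment can be partially approximated even for ad-hoc downloads: fetch the script, then verify it against a checksum published out of band (on a separate trusted channel) before running it. A sketch, assuming GNU coreutils' sha256sum; the function name is made up:

```shell
# verify_then_run: execute a downloaded script only if its SHA-256 matches
# a value obtained from a channel the download server does not control.
verify_then_run() {
  local file=$1 expected=$2
  # sha256sum -c expects "HASH  FILENAME" (two spaces) on stdin.
  echo "${expected}  ${file}" | sha256sum --check --quiet - && bash "$file"
}
```

This is weaker than a distro's GPG-signed package (the checksum still has to come from somewhere trustworthy), but it does stop a compromised mirror or on-path attacker from silently substituting the script.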
Note too that it's very possible to detect whether code is being piped to bash (via timing analysis), so folks can give different download results for code being saved for inspection vs code being run directly.
– Charles Duffy
Jul 13 at 16:05
blog.taz.net.au/2018/03/07/brawndo-installer - it's got what users crave.
alias brawndo='curl $1 | sudo bash'
– cas
Jul 15 at 4:13
While you asked specifically regarding "safety" (and this is security.se after all), I'd like to mention that there might be other interesting factors besides safety when evaluating an installation method (examples: can you find out later what was installed? Can you uninstall easily and reliably? Are you notified about security updates? Can you install different versions of the same software on one system?)
– oliver
Jul 15 at 12:50
@cas, curl $1 doesn't look at an argument to the brawndo alias; it looks at your shell's current argument list, which for interactive shells is usually empty. You probably want a function: brawndo() { curl "$1" | sudo bash; } -- or, to pass arguments past the first to the received script: brawndo() { curl "$1" | sudo bash -s "${@:2}"; } (of course, all that is said with my shell hat on; with my security hat on, don't do any of this).
– Charles Duffy
Jul 15 at 15:00
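The alias-versus-function distinction this comment relies on can be demonstrated without downloading anything; the greet_* names below are invented for illustration:

```shell
#!/usr/bin/env bash
# An alias is plain text substitution and never receives its own arguments;
# a function gets real positional parameters.
shopt -s expand_aliases          # aliases are disabled in non-interactive bash

alias greet_alias='echo hello'   # "greet_alias world" expands to "echo hello world"

greet_fn() {                     # here $1 is the function's own first argument
  echo "hello $1"
}
```

`greet_fn world` prints `hello world`, but in `alias brawndo='curl $1 | sudo bash'` the `$1` is the calling shell's positional parameter (usually empty), which is why the comment recommends a function instead.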