I executed this:

$ dd if=/dev/random of=foo bs=1G count=1
0+1 records in
0+1 records out
6 bytes (6 B) copied, 0.00016958 s, 35.4 kB/s

$ stat -c "%s" foo
6

Also, this does not work; the command just hangs:

$ head -c 500 /dev/random > foo

What is my mistake?

I'm on Linux Mint.

  • Possible duplicate of Generate a random filename in unix shell Commented May 2, 2016 at 19:51
  • This question was not about the filename but the content. Not a duplicate. Commented May 3, 2016 at 1:42

1 Answer


/dev/random will only deliver random data as long as the kernel's entropy pool is estimated to be non-empty; once that estimate is exhausted, reads block until more entropy is collected. That is why your dd returned a short read (it got only the bytes that were immediately available) and why head appeared to hang. If you want to read bigger chunks of (pseudo-)random data, read from /dev/urandom instead, which never blocks.
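For example, a 1 GiB test file can be generated from /dev/urandom like this (a sketch assuming GNU dd; iflag=fullblock makes dd retry short reads, which avoids the "0+1 records" partial-block result seen in the question):

```shell
# Generate 1 GiB of pseudo-random data without blocking.
# iflag=fullblock ensures each 1 MiB block is read in full,
# so the output file really is bs*count bytes long.
dd if=/dev/urandom of=foo bs=1M count=1024 iflag=fullblock

# Verify the size: prints 1073741824 (= 1024 * 1024 * 1024).
stat -c "%s" foo
```

Using bs=1M count=1024 instead of bs=1G count=1 also avoids allocating a 1 GiB buffer in one go.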


4 Comments

/dev/urandom is very, very slow; it takes about 1 minute to generate about 30 MB. Is that normal?
Is it possible to generate low-quality random data quickly? I just want a dummy file to benchmark a file transfer over NFS.
For that purpose, it should be sufficient to create a smaller file and concatenate it a couple of times to build a larger one: dd if=/dev/urandom bs=1024 count=1024 >1m; cat 1m 1m 1m 1m 1m >5m; cat 5m 5m 5m 5m 5m >25m; cat 25m 25m 25m 25m >100m
That should indeed work for NFS. To generate test data for tools that are harder to fool, like rsync or xz, you can encrypt zeros with a random key: head -c 100M /dev/zero | openssl enc -aes-128-cbc -pass pass:"$(head -c 20 /dev/urandom | base64)" > my100MBfile. (GNU head takes an uppercase M suffix for mebibytes.)
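The concatenation trick from the comments can also be written as a doubling loop (a sketch assuming GNU coreutils; the file name chunk is illustrative):

```shell
# Create a 1 MiB random seed file, then double it 6 times: 1 -> 64 MiB.
dd if=/dev/urandom of=chunk bs=1M count=1 iflag=fullblock 2>/dev/null
for i in 1 2 3 4 5 6; do
  # Concatenate the file with itself, doubling its size each pass.
  cat chunk chunk > chunk.tmp && mv chunk.tmp chunk
done
stat -c "%s" chunk   # 67108864 bytes (64 MiB)
```

Each pass reads only from the page cache, so this is far faster than pulling the full amount from /dev/urandom, at the cost of the data being highly repetitive (fine for an NFS throughput test, useless against compression-aware tools).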
