Wednesday, March 6, 2013

Cloud Gaming Explained

The next generation of consoles is almost upon us. Before we save up for that glorious new $500 gaming experience, it's a good idea to understand just what we are paying for. A big part of the next generation is Cloud Gaming and Streaming Games. These innovations are very exciting, but not everyone understands what they actually mean.

All previous generations of consoles have been restricted by the power of the client (the actual hardware device). This is because the console is dedicated to doing all the work required to get the game to function. It needs to be powerful enough to process the physics, control the AI, perform collision detection, render complex HD scenes, and so on. So it is only reasonable to assume that every device has to be powerful enough to actually run the game we want to play... right?

Not anymore. Average network speeds are moving up around the globe, and cloud technologies are stabilizing, standardizing, and taking hold in every industry -- including gaming. We are entering a new era of what is possible by leveraging these strengths. My first brush with this concept was way back in the 1990s. At that time, it was common to see "dumb terminals" in schools, computer labs, and libraries across the US. These were very simple machines that hooked up to a central computer via a serial port and provided a rudimentary text console. The devices themselves lacked much capability: they could turn on, proxy data through the serial port, and print things to the amber or green monochromatic display. All the work was done on the back-end server, and the thin client had just enough horsepower to allow user interaction. That simple concept -- pushing all the work to the server -- is the basis of Cloud Gaming.

In the 2000s we began to see some kewl browser-based games powered by Flash or Java. Unfortunately, there was no elegant way at that time to leverage the graphics hardware of the host. Finally, WebGL was introduced in 2010 (the first stable release came in 2011). It provided a new standard with OpenGL ES and HTML5 Canvas integration, a JavaScript API, and hardware GPU acceleration. It's now a cross-platform, royalty-free standard built into most web browsers. I became interested in the possibilities of WebGL right away. I scoured the net looking for a good showcase, and I came across a nifty project called quake2-gwt-port. I made the screencast below in April, 2010. I was running the server on localhost using a test release of Chrome, and while there is no sound in the video, the game's audio was playing perfectly for me through HTML5 <audio> elements!

WebGL Quake II

This is a great example of "how" Cloud Gaming will work. Your console will have to shoulder much less of the responsibility. It will communicate through some proprietary protocol to servers in the cloud which do all of the heavy lifting. Your device just needs enough power to display the interface and transmit user interaction. If a web browser can do this, imagine what a console designed specifically for the cloud could do! The evolution to cloud gaming will allow these future devices to be cheaper, smaller (think iPhone-sized), and have a much longer lifespan. Their internal technology could remain static (even get cheaper), while the content they provide has the potential to become infinitely more complex and powerful.

Cloud Streaming is how companies like Sony plan to tie this into a business model. They will most likely provide a subscription service which gives users access to a huge library of games, much like Netflix does for movies. When a user selects a game to play, a properly sized cloud instance will spin up (in a nearby availability zone) and begin transmitting the content to the user's console. This provides some deeply interesting cloud-based cost models for the provider. Time will tell if those models pay off, but I have a feeling they will.

If you are like me, you're probably wondering how you can check out some of that cloud gaming awesomeness right now! Well, you can download the Quake II port at the link above and stick it on a cloud instance. I'll be doing that myself later in the week, and I'll post a brief howto. I'm also playing around with a tool called emscripten that compiles C and C++ into JavaScript. I want to get a cloud-ified ScummVM (or some other emulator) up and running in the cloud, and see what the end-user experience is like. I'll keep the blog updated with my adventures.


Friday, March 1, 2013

Fedora 18: Encrypting Your Home Directory

There are a number of steps for encrypting your home directory in Fedora, and enabling system applications like GDM to decrypt your files on login. I'll walk through the process of how I got this set up on my own machine.

First, make sure you have ecryptfs and related packages installed:

# yum install keyutils ecryptfs-utils pam_mount

Then you can either go the easy way:

# authconfig --enableecryptfs --updateall
# usermod -aG ecryptfs USER
# ecryptfs-migrate-home -u USER
# su - USER
$ ecryptfs-unwrap-passphrase ~/.ecryptfs/wrapped-passphrase (write this down for safekeeping)
$ ecryptfs-insert-wrapped-passphrase-into-keyring ~/.ecryptfs/wrapped-passphrase

[All done! Now you can log in via GDM or the console ("su - user" will not work without running ecryptfs-mount-private)]
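If you want to verify the result, a quick check against /proc/mounts will show whether the home directory is an active ecryptfs mount. This is just a sketch; the helper name is my own and the path is an example:

```shell
# is_ecryptfs_mounted: report whether a directory is an active ecryptfs
# mount by checking /proc/mounts (hypothetical helper; path is an example)
is_ecryptfs_mounted() {
  grep -qs " $1 ecryptfs " /proc/mounts && echo yes || echo no
}
is_ecryptfs_mounted /home/user
```

After a successful GDM login this should print "yes"; from a plain "su - user" shell it will print "no" until you run ecryptfs-mount-private.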

OR you can take the hard way, which I followed. There are some benefits to going this route: it is a much more configurable install, which allows you to select the cipher and key strength.

First enable ecryptfs:

# authconfig --enableecryptfs --updateall

Move your home directory out of the way, and make a new one:

# mv /home/user /home/user.old
# mkdir -m 700 /home/user
# chown user:user /home/user
# usermod -d /home/user.old user

Make a nice random-ish passphrase for your encryption:

# < /dev/urandom tr -cd '[:graph:]' | fold -w 64 | head -n 1 > /root/ecryptfs-passphrase
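As a sanity check, each run of that pipeline should emit a single 64-character line of printable ASCII. A quick sketch reusing the same pipeline:

```shell
# Generate a candidate passphrase and confirm it is 64 characters long
pass=$(< /dev/urandom tr -cd '[:graph:]' | fold -w 64 | head -n 1)
echo "${#pass}"   # prints 64
```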

Mount the new /home/user with ecryptfs:

# mount -t ecryptfs /home/user /home/user
(choose passphrase as the key type, then any cipher, any key strength, enable plaintext passthrough, and enable filename encryption)
# mount | grep ecryptfs > /root/ecryptfs_mount_options

Add an entry to /etc/fstab, using the mount options captured in /root/ecryptfs_mount_options above, plus the user, noauto, and exec options, like so (your sig values will differ):

/home/user /home/user ecryptfs rw,user,noauto,exec,relatime,ecryptfs_fnek_sig=113c5eeef8a05729,ecryptfs_sig=113c5e8ef7a05729,ecryptfs_cipher=aes,ecryptfs_key_bytes=32,ecryptfs_passthrough,ecryptfs_unlink_sigs 0 0

Wrap the passphrase with the user's login password:

# ecryptfs-wrap-passphrase /root/.ecryptfs/wrapped-passphrase

Copy over files to the new home dir:

# su - user
$ rsync -aP /home/user.old/ /home/user/

Unmount /home/user and set up the initial files for ecryptfs and pam_mount:

# umount /home/user
# usermod -d /home/user user
# mkdir /home/user/.ecryptfs
# cp /root/.ecryptfs/sig-cache.txt /home/user/.ecryptfs
# cp /root/.ecryptfs/wrapped-passphrase /home/user/.ecryptfs
# touch /home/user/.ecryptfs/auto-mount
# touch /home/user/.ecryptfs/auto-umount
# chown -R user:user /home/user/.ecryptfs
# su - user -c "ecryptfs-insert-wrapped-passphrase-into-keyring /home/user/.ecryptfs/wrapped-passphrase"

Now it gets interesting! Edit /etc/pam.d/postlogin and add the pam_ecryptfs.so and pam_mount.so lines shown here:

#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth        optional      pam_ecryptfs.so unwrap
auth        optional      pam_permit.so
auth        optional      pam_mount.so
password    optional      pam_ecryptfs.so unwrap
session     optional      pam_ecryptfs.so unwrap
session     [success=1 default=ignore] pam_succeed_if.so service !~ gdm* service !~ su* quiet
session     [default=1]   pam_lastlog.so nowtmp silent
session     optional      pam_lastlog.so silent noupdate showfailed
session     optional      pam_mount.so

Edit /etc/security/pam_mount.conf.xml and replace the whole file with:

<?xml version="1.0" encoding="utf-8" ?>
<!DOCTYPE pam_mount SYSTEM "pam_mount.conf.xml.dtd">
<pam_mount>
<debug enable="0" />
<luserconf name=".pam_mount.conf.xml" />
<mntoptions allow="*" />
<mntoptions require="" />
<path>/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin</path>
<logout wait="0" hup="0" term="0" kill="0" />
<lclmount>/bin/mount -i %(VOLUME) "%(before=\"-o\" OPTIONS)"</lclmount>
</pam_mount>

Finally,

# su - user -c "vi /home/user/.pam_mount.conf.xml"

And add this:

<pam_mount>
<volume noroot="1" fstype="ecryptfs" path="/home/user" />
</pam_mount>

Now you can log in and see your decrypted files! ("su - user" will not work without running ecryptfs-mount-private.)

Whichever method you chose, you should also set up swap encryption with:

# ecryptfs-setup-swap

If you want to go the extra mile, you can symbolically link /home/user/.ecryptfs/wrapped-passphrase to a file on a flash drive and mount the drive at boot, or use autofs or some scripting to mount it on login (just in time for PAM to access it). However, if you are going to go that far, you should look into more CIA-level disk encryption, like TrueCrypt.
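The flash-drive idea can be sketched as a small helper. The function name and mount point below are my own inventions; adjust the paths for your device:

```shell
# move_wrapped_passphrase: relocate the wrapped passphrase onto removable
# media and leave a symlink in its place (hypothetical helper)
move_wrapped_passphrase() {
  local keydrive="$1" ecdir="$2"
  mv "$ecdir/wrapped-passphrase" "$keydrive/wrapped-passphrase"
  ln -s "$keydrive/wrapped-passphrase" "$ecdir/wrapped-passphrase"
}
# e.g. move_wrapped_passphrase /mnt/keydrive /home/user/.ecryptfs
```

With this in place, PAM can only unwrap the passphrase when the drive is mounted, so losing the drive means losing access to the encrypted home.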