CS111 Lecture 19 (12/3/09)

By Jessica Wang, Andrew Lumbang

CLOUD COMPUTING

Brief History

Mainframes:

Mainframes were developed in the 1960s. Some notable companies that made and sold mainframes were IBM and Fujitsu. Mainframes are data-intensive: a problem with mainframes was getting data to the right spot. Some positive qualities of mainframes are that they are optimized for data-intensive workloads and are very reliable. Below is a diagram of the general structure of a mainframe.

[Figure 1: general structure of a mainframe]

Mainframes improved as time went by, but they were still pretty expensive.

Clusters:

Clusters were developed in the 1990s as a cheaper alternative to mainframes, because huge mainframes were too expensive. Clusters weren't as reliable as mainframes, but the extra reliability of mainframes wasn't worth the price.

[Figure 2]

Clusters are essentially composed of Linux boxes all connected on the same IP network, so they can all communicate with each other across that network. The IP network has a speed on the order of gigabits per second. Clusters became very popular, and nowadays most computing is done with clusters instead of mainframes or clouds. Some notable cluster systems are Beowulf and SGE (Sun Grid Engine). One nice thing about clusters is that the individual boxes don't have to be the same; the machines in the cluster can be heterogeneous (typically x86-64).

[Figure 3]


Clouds

SECURITY (continued)

Continuing from the previous lecture, we now want something that is simpler and easier to manage and understand.  Nevertheless, we still want the ability to accurately prohibit bad accesses and allow good ones.
[figure]
Techniques for doing so:
Traditional Unix

|   User  |  Group  |  Other  |
|  r w x  |  r w x  |  r w x  |
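
These nine bits are just the low nine bits of the file's mode word, which the chmod and stat system calls operate on. A rough sketch (the file name "notes.txt" is made up for illustration):

/* perms.c - sketch: the rwx bits are the low 9 bits of st_mode.
   Assumes a file named "notes.txt" exists in the working directory. */
#include <stdio.h>
#include <sys/stat.h>

int main(void) {
    /* 0754 = rwx for the owner, r-x for the group, r-- for others */
    if (chmod("notes.txt", 0754) != 0) { perror("chmod"); return 1; }

    struct stat st;
    if (stat("notes.txt", &st) != 0) { perror("stat"); return 1; }

    printf("owner: %c%c%c  group: %c%c%c  other: %c%c%c\n",
           (st.st_mode & S_IRUSR) ? 'r' : '-',
           (st.st_mode & S_IWUSR) ? 'w' : '-',
           (st.st_mode & S_IXUSR) ? 'x' : '-',
           (st.st_mode & S_IRGRP) ? 'r' : '-',
           (st.st_mode & S_IWGRP) ? 'w' : '-',
           (st.st_mode & S_IXGRP) ? 'x' : '-',
           (st.st_mode & S_IROTH) ? 'r' : '-',
           (st.st_mode & S_IWOTH) ? 'w' : '-',
           (st.st_mode & S_IXOTH) ? 'x' : '-');
    return 0;
}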

Original Unix: a process belongs to exactly one group at a time.

Berkeley Software Distribution (BSD): added supplementary groups, so a process can belong to several groups at once.

[figure]


***only ROOT can create groups***

Access Control Lists (ACLs)
An owner of a resource can specify an access control list: a list of principals and their permissions.  Typically used in Windows NT, Solaris, Samba, and now even Unix, ACLs add more flexibility but also more complexity.

Ex. On a Solaris machine

$ getfacl .
user: rwx
group: r-x
other: r-x
$ setfacl .

Key Idea:
If you correctly set the default ACL values, then when an object or resource is created it inherits the right permissions, and you will not have to run setfacl very often.  In other words, if you set the defaults on the root directory accurately, the directories created under it will inherit the same properties.
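
As a sketch of what those defaults look like programmatically, the POSIX.1e draft ACL API (libacl on Linux, compile with -lacl) can read back both the access ACL and the default ACL that new objects will inherit; the directory name here is made up:

/* showacl.c - print a directory's access ACL and default ACL.
   Uses the POSIX.1e draft API (libacl on Linux); compile with -lacl. */
#include <stdio.h>
#include <sys/acl.h>

static void show(const char *dir, acl_type_t type, const char *label) {
    acl_t acl = acl_get_file(dir, type);
    if (acl == NULL) { perror(label); return; }
    char *text = acl_to_text(acl, NULL);        /* human-readable form */
    printf("%s:\n%s\n", label, text ? text : "(none)");
    acl_free(text);
    acl_free(acl);
}

int main(void) {
    show("/srv/project", ACL_TYPE_ACCESS,  "access ACL");
    show("/srv/project", ACL_TYPE_DEFAULT, "default ACL");  /* inherited by new objects */
    return 0;
}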

Problem:
$ sudo
# cd /bad/guy
# ls

All you wanted was the ability to inspect a file, not to run some program.

 

Role-Based Access Control (RBAC)
Only really used in big popular products (Oracle, Solaris, and Active Directory).
Grants access to roles, not to individual people.

Ex. If you're in the backup role, you only get backup abilities.  The same works for poweroff, changing grades, etc.

RBAC has a table that tells us, for every user, which roles they can assume.  Applications run c, but have li…  However, the downside of this method is that it is too complicated and is not really used in practice.
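
A toy sketch of that table (the users, roles, and entries below are invented): a separate table records which roles each user may assume, and access checks consult roles rather than individual users.

/* rbac.c - toy sketch of an RBAC assignment table. */
#include <stdio.h>
#include <string.h>

struct assignment { const char *user; const char *role; };

/* which roles each user may assume (invented data) */
static const struct assignment table[] = {
    { "alice", "backup"   },
    { "alice", "poweroff" },
    { "bob",   "backup"   },
};

/* can this user assume this role? */
static int may_assume(const char *user, const char *role) {
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (!strcmp(table[i].user, user) && !strcmp(table[i].role, role))
            return 1;
    return 0;
}

int main(void) {
    printf("alice as poweroff: %d\n", may_assume("alice", "poweroff"));  /* 1 */
    printf("bob as poweroff:   %d\n", may_assume("bob",   "poweroff"));  /* 0 */
    return 0;
}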

The cube is now:

[figure]

Mechanisms for Enforcing Access Control

  1. ACLs: The most commonly used mechanism.  An ACL is attached to each object (e.g., stored with its inode) and lists who is allowed to use the object.  All accesses are mediated by the OS: you must ask the OS for permission before accessing the object, and through system calls you give the OS an ID.  The ACL itself is controlled by the OS.
  2. Capabilities approach: Records, for each user, the list of resources that user can access.  Capabilities are unforgeable object references that point to a set of access rights.  They are almost like an inverted ACL: each principal carries a set of capabilities.  Every access must still be examined, which requires help from the OS and hardware.  As a result, if you screw up, the OS can catch it more easily.  (See the file-descriptor sketch below.)

[figure]
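
A concrete analogy for capabilities already exists in Unix: a file descriptor is an unforgeable handle handed out by the kernel, and possessing it is what grants access.  In this sketch (the file name is just an example of a world-readable file), the child never calls open() and never passes a permission check of its own; the inherited descriptor is its access right.

/* fd_capability.c - sketch: a file descriptor behaves like a capability. */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    int fd = open("/etc/hostname", O_RDONLY);   /* the permission check happens here */
    if (fd < 0) { perror("open"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {
        /* child: never opens the file; the inherited fd *is* its access right */
        char buf[128];
        ssize_t n = read(fd, buf, sizeof buf - 1);
        if (n < 0) { perror("read"); _exit(1); }
        buf[n] = '\0';
        printf("child read via inherited fd: %s", buf);
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    close(fd);
    return 0;
}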

 

 

 

Neither approach dominates the other, since both ensure unforgeability and have the OS check all accesses.  However, over a network the capabilities approach is preferred: to gain access you need to send your credentials (which should be encrypted) across the network, and that is exactly what the capabilities method does.  Even if you follow the ACL approach, you would end up molding it into the capabilities one.

 

TRUSTED SOFTWARE
The OS doesn't trust users, and consequently doesn't trust applications, which run on behalf of users.  However, some programs do need to be trusted.  One such program is login.

  1. Prints a "login:" prompt  ($ login)
  2. Requires name and password
  3. Checks if the password matches
  4. login becomes that user by calling setuid, e.g. setuid(10976)

 

Running a process as another user is not a security breach in this case, because setuid() is a privileged syscall: the kernel checks whether the caller is allowed to change its UID.  login is allowed because its executable has the setuid bit set, so it starts out running as root.

                                            -r-sr-xr-x                                            
s: when this file starts running, the process runs as the owner of the file (root).
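
A much-simplified sketch of that flow (the password check is a stub; a real login consults /etc/shadow or PAM, initializes supplementary groups, and does far more error handling):

/* login_sketch.c - simplified sketch of the login flow described above.
   check_password() is a stub; do not use this as a real login. */
#include <stdio.h>
#include <unistd.h>
#include <pwd.h>

static int check_password(const char *user, const char *pass) {
    (void)user; (void)pass;
    return 1;                                  /* stub: pretend it matched */
}

int main(void) {
    char user[64];
    printf("login: ");                         /* 1. print the prompt */
    if (scanf("%63s", user) != 1) return 1;

    char *pass = getpass("Password: ");        /* 2. require name and password */
    if (!check_password(user, pass)) return 1; /* 3. check that it matches */

    struct passwd *pw = getpwnam(user);        /* look up the user's uid/gid/shell */
    if (!pw) return 1;

    /* 4. become that user: drop the group first, then the uid (needs root) */
    if (setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0) {
        perror("setuid");
        return 1;
    }
    execl(pw->pw_shell, pw->pw_shell, (char *)NULL);   /* hand over to the user's shell */
    perror("execl");
    return 1;
}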

Since we want the set of software we trust to be small, which programs do we actually trust, and how?

How can we trust login?
Compare a cryptographic checksum of the program against a known-good value.
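
One way to compute such a checksum, sketched with OpenSSL's EVP digest API (the choice of SHA-256 and of OpenSSL is an assumption; any strong cryptographic hash serves the same purpose):

/* checksum.c - hash a binary so it can be compared against a known-good value.
   Uses OpenSSL; compile with -lcrypto. */
#include <stdio.h>
#include <openssl/evp.h>

int main(int argc, char **argv) {
    if (argc != 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);

    unsigned char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        EVP_DigestUpdate(ctx, buf, n);

    unsigned char md[EVP_MAX_MD_SIZE];
    unsigned int len;
    EVP_DigestFinal_ex(ctx, md, &len);
    EVP_MD_CTX_free(ctx);
    fclose(f);

    for (unsigned int i = 0; i < len; i++)
        printf("%02x", md[i]);                 /* compare against the published value */
    printf("\n");
    return 0;
}

Running it on the login binary and comparing the output to the value the vendor published is the basic idea; this is roughly what package managers and code signing automate.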

How does the vendor trust login?
Look at login.c and confirm there are no dangerous parts.

However, simply reading the source code does not always guarantee a working and safe program.  In his paper "Reflections on Trusting Trust", Ken Thompson demonstrates how reading code alone does not ensure integrity, by forcing GCC to misbehave.

[figure]

Looking at the source code, it seems perfect: there is nothing wrong with the .c files.  However, the compromised compiler can still sabotage any code it compiles.  The only way to detect the bug is to disassemble the object code itself.

 

Trusted Computing Base

The trusted base should be as small as possible (according to K. Thompson, it's bigger than you think) and should be kept secure.  It contains the kernel and anything that runs as root.