Many people are still unaware of the difference between cloud computing architecture and traditional servers, so here it is in five easy steps:
A traditional server must reserve a minimum of resources for the operating system, so if the system uses 10% of the server, the other 90% sit permanently available for services (web, FTP, email, etc.).

This means the server is underutilized most of the time, yet it keeps consuming power, network, and cooling, driving up costs. In cloud computing, you use only the resources you need, avoiding waste through idleness.
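The waste described above can be made concrete with a back-of-the-envelope calculation. A rough sketch follows; the hourly rate and the 10% utilization figure are hypothetical, chosen only to illustrate the comparison:

```python
# Hypothetical cost comparison: an always-on traditional server vs.
# pay-per-use cloud resources. All figures are illustrative only.

HOURS_PER_MONTH = 730

def traditional_cost(hourly_rate):
    # A physical server bills for every hour, even while idle.
    return hourly_rate * HOURS_PER_MONTH

def cloud_cost(hourly_rate, avg_utilization):
    # In the cloud you pay (roughly) for the capacity you actually use.
    return hourly_rate * HOURS_PER_MONTH * avg_utilization

rate = 0.50          # assumed cost per hour of full capacity
utilization = 0.10   # server busy only 10% of the time, as above

print(f"Traditional: ${traditional_cost(rate):.2f}/month")
print(f"Cloud:       ${cloud_cost(rate, utilization):.2f}/month")
```

Under these assumed numbers, the idle traditional server costs roughly ten times more per month than paying only for what is used.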
A virtualized server is spread across a cloud of computing resources, so your site does not go offline when a component of the physical server fails (or even in case of fires or disasters at the datacenter), as it would on a traditional server.
If your traffic is variable (peak hours with thousands of hits, quieter periods in between), you neither need to maintain an oversized machine that sits idle nor risk hiring one that cannot handle the load: virtualization makes the upgrade/downgrade process immediate, and the site stays online throughout. The same applies to disk usage: if demand for storage grows, you can increase the space instantly without stopping the machine. You also eliminate the bureaucracy of upgrading physical hardware and save time.
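The upgrade/downgrade process described above can be sketched as a simple scaling rule. This is a minimal illustration, not a real provider's API; the load thresholds and step sizes are assumptions:

```python
# Minimal autoscaling sketch: grow or shrink capacity with demand.
# Thresholds (80% / 30%) and one-instance steps are illustrative.

def scale(instances, load_per_instance, high=0.80, low=0.30,
          min_instances=1):
    """Return the new instance count for the observed per-instance load."""
    if load_per_instance > high:
        return instances + 1        # demand spike: scale up
    if load_per_instance < low and instances > min_instances:
        return instances - 1        # quiet period: scale down
    return instances                # within bounds: hold steady

print(scale(2, 0.95))   # peak hours: add capacity
print(scale(3, 0.10))   # quiet hours: release capacity
print(scale(1, 0.10))   # never drop below the minimum
```

A real cloud platform applies a rule like this continuously, which is why the site "is always online" while capacity changes underneath it.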
With all resources distributed and scalable in the cloud, it becomes easy to create applications that can be accessed remotely by mobile devices in an economically feasible way; with a traditional architecture, the comparable costs would be prohibitive.
By keeping a baseline that lets your application run, with the option of increasing resources on demand and scaling them back easily without physical changes, you guarantee that you do not pay for capacity you are not using and do not overspend when you need more. This is the most streamlined and cost-effective approach available today.
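The "baseline plus on-demand" billing idea above can be sketched as a small formula: a fixed charge for the always-on baseline, plus a charge only for the extra hours of burst capacity. The rates and the number of peak hours are hypothetical:

```python
# Sketch of "baseline + on-demand" billing: a small fixed baseline
# plus only the extra capacity consumed during bursts.
# Rates and usage figures below are hypothetical.

HOURS_PER_MONTH = 730

def monthly_bill(baseline_rate, burst_rate, burst_hours):
    # Baseline runs all month; bursts are billed only while active.
    return baseline_rate * HOURS_PER_MONTH + burst_rate * burst_hours

# Assumed: baseline at $0.10/h, bursts at $0.40/h for 50 peak hours.
print(f"${monthly_bill(0.10, 0.40, 50):.2f}")
```

The burst charge stays proportional to actual peak usage, which is exactly the "pay only when you need it" guarantee the paragraph describes.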