
Sunday, November 24, 2013

Artificial Intelligence: Technological Singularity

The concept of artificial intelligence (AI) has been around as long as the idea of machines and computers. People are fascinated by the idea that it is possible to write software that can “think” to some extent. Technologies with AI are all around us, but we don't always think of them as AI, partly because movie AI is far more advanced than today's and partly because we have simply gotten used to AI being part of the world. Current examples include robots in car factories, automated customer service, the Roomba vacuum cleaner, IBM's Watson, and self-parking cars. Two of the biggest AI areas right now are voice-recognition software and self-driving cars. The main use of AI is to improve efficiency and to help humans with dangerous or difficult tasks; there are already smart robots disabling land mines and handling radioactive materials.


As mentioned earlier, the AI technology available today is rather one-dimensional compared to what one sees in movies. AI is only as smart as the code it runs on. I don't think we are anywhere near creating a truly “intelligent” AI, one with the capabilities of human thought, and whether self-awareness can ever be achieved in a machine is debatable. One view is that if Moore's law continues to hold, then it is only a matter of time before humans create a machine with superhuman intelligence. Vernor Vinge championed this view and even went as far as saying it will occur by the year 2030. If mankind ever develops software that lets a machine analyze data, make decisions, and act autonomously, then we can expect machines to begin designing and building even better machines. In turn, those new machines can build still more powerful ones. Once machines are able to improve themselves, they will surpass human intelligence and humans will become obsolete; that point is called the technological singularity. What will happen then?


Sunday, November 17, 2013

History of Computer Science: Cryptography with Digital Computers

                With the computer revolution came more advanced cryptographic techniques that were previously impossible, or at the very least inefficient. In 1949, Claude Shannon started the cryptographic revolution with his paper Communication Theory of Secrecy Systems, applying the rigorous mathematical techniques of his earlier work (the 1948 paper A Mathematical Theory of Communication, which crowned him “The Father of Information Theory”) to prove statements about the security of cryptographic algorithms. The Lucifer cipher, developed by Horst Feistel in the early 1970s while working for IBM, paved the way for modern symmetric key ciphers. By the mid-1970s the computer revolution was in full swing and it became clear that digital data needed to be secured. At the time cryptography was a field only for the military and the government, until the National Bureau of Standards called for cipher proposals. The only serious contender was the Lucifer cipher, which the NBS handed to the government experts, the NSA, who modified it and created the Data Encryption Standard (DES). With the ever-increasing computational power of computers, DES has since been replaced by Triple DES and AES.



                During the same time that symmetric key cryptography was being developed, another cryptographic technique was being born: public key cryptography. In 1976, Whitfield Diffie and Martin Hellman published a paper titled New Directions in Cryptography, which introduced public key cryptography and one-way functions. Unlike symmetric keys, which had to be shared before any communication took place, the Diffie-Hellman key exchange let two parties establish a shared secret without prior key sharing. One-way functions allowed public key cryptosystems to flourish because they are easy to compute in one direction but computationally infeasible to invert. Diffie-Hellman inspired RSA, published in 1977 by Ronald L. Rivest, Adi Shamir, and Leonard M. Adleman, which is still used today for public key cryptography. For internet security, PGP was released in 1991 and is still considered secure today. PGP uses public key cryptography, so knowing a recipient's public encryption key does not reveal the private decryption key. Cryptography has become extremely important and will only grow more so as the power of computers increases along with the growth of digital data and the internet.
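
To make the Diffie-Hellman exchange above concrete, here is a toy run in Python. The prime and generator are textbook-sized for readability only; real systems use much larger, carefully vetted parameters.

```python
# Toy Diffie-Hellman key exchange (p and g are demo-sized, NOT secure).
import random

p, g = 23, 5                     # public: a small prime and a generator

a = random.randrange(2, p - 1)   # Alice's private exponent
b = random.randrange(2, p - 1)   # Bob's private exponent

A = pow(g, a, p)                 # Alice publishes g^a mod p
B = pow(g, b, p)                 # Bob publishes g^b mod p

# Each side raises the other's public value to its own secret exponent.
assert pow(B, a, p) == pow(A, b, p)   # identical shared secret, never sent
```

The one-way function here is modular exponentiation: computing g^a mod p is fast, but recovering a from the result (the discrete logarithm) is believed infeasible for large p.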

More detailed history here and here.
More info on different cryptography systems here.
Book I've been reading: Information Security: Principles and Practice by Mark Stamp.

Saturday, November 16, 2013

History of Computer Science: Cryptography Before Digital Computers

The beginning of cryptography was when humans spoke their first words. Even to this day a language can be considered a form of cryptography, because if you don’t know the language another person is speaking you will have no idea what secrets they are talking about. The same goes for written language, since until recently a majority of people were not able to read. Speaking and writing are easily breakable nowadays, though. Egyptian hieroglyphs could be considered a form of cryptography too, as they used pictures to hide their stories. The first algorithmic method of securing a message came from the Greeks, who came up with the Spartan scytale around the 7th century B.C.: a strip of parchment was wrapped around a rod of a particular diameter and the message written along it, so only a rod of the same diameter could line the letters up again. The Caesar cipher appeared during, you guessed it, Julius Caesar’s rule and was used for war (as was the scytale). The Caesar cipher, a monoalphabetic cipher, used simple substitution as a form of confusion. There was little advancement in cryptography until the Middle Ages, though the Arabs did make headway in cryptanalysis by using frequency analysis.
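
Simple substitution is easy to show in code; here is a minimal Caesar cipher sketch in Python, using the three-letter shift traditionally attributed to Caesar.

```python
# Caesar cipher: shift every letter by a fixed amount, wrapping past 'Z'.
def caesar(text, shift):
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)              # leave spaces and punctuation alone
    return ''.join(out)

enc = caesar("ATTACK AT DAWN", 3)       # 'DWWDFN DW GDZQ'
dec = caesar(enc, -3)                   # shifting back recovers the message
```

With only 25 possible shifts it falls to brute force instantly, and frequency analysis breaks any monoalphabetic substitution even when the whole alphabet is scrambled.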



                In the 1400’s, Leon Battista Alberti, “The Father of Western Cryptology,” developed polyalphabetic substitution. A polyalphabetic cipher uses multiple alphabets to hide the plaintext by allowing different ciphertext symbols to represent the same plaintext symbol. During the 16th century, Blaise de Vigenère made improvements to polyalphabetic substitution, and his cipher remained in use through the American Civil War. Around WWI, codebook ciphers and the one-time pad showed up. The one-time pad was started by Gilbert Vernam and improved by Joseph Mauborgne; if the key is truly random, kept secret, and used only once, it provides perfect secrecy. Arthur Scherbius invented the Enigma machine at the end of WWI; it was used commercially at first and then improved by the German government for use in WWII. The machine was broken by a Polish cryptologist, Marian Rejewski, and his work was handed over to Alan Turing and the code breakers at Bletchley Park, who built bombes, electromechanical machines designed specifically to break Enigma.
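
The one-time pad fits in a few lines; here is a sketch in Python using XOR, with the pad drawn from the standard library's secrets module.

```python
# One-time pad via XOR: a truly random key, as long as the message and
# used exactly once, provides perfect secrecy.
import secrets

def otp(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"BLETCHLEY PARK"
key = secrets.token_bytes(len(plaintext))   # random pad, never reuse it
ciphertext = otp(plaintext, key)
assert otp(ciphertext, key) == plaintext    # XOR with the same pad decrypts
```

The catch, and the reason the pad saw limited use, is key distribution: both sides need the same pad, as long as all their traffic combined, delivered securely in advance.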

More detailed history here and here.

More info on different cryptography systems here.

Sunday, November 10, 2013

File Sharing: Sharing is Caring

File sharing is what makes up the internet. The internet would not exist without the ability to share files between applications and people. Whether you are browsing the web, sending emails, or checking Facebook, you are sharing files. The issue that comes into play when sharing files is security. For most files, integrity alone is enough when sharing across the internet, but sensitive information requires both confidentiality and integrity. And if you are downloading files from third-party sources, torrents, or possibly even from Dropbox, they could include viruses or malware. Many layers of security are required on both the host and the user end to make sure files are safe and secure.
            One of the aspects of file sharing is checking the integrity of a file. When you upload or send a file, someone could capture the packets and modify the file any way they desire. This is where integrity comes in: it tells the parties involved whether the original file has been tampered with. The two most common methods for proving file integrity are the MD5 and SHA-1 hash functions. They compute a hash over all the data sent, but unfortunately they are not as secure as once believed. The next level of security for file sharing is confidentiality, which requires files to be encrypted with a key before being sent out. The key is either a symmetric key established between the parties, a public key, or a session key if a connection was established (hopefully using a secure protocol). The files are then encrypted with algorithms such as AES or DES. A file can have both integrity and confidentiality if this is executed properly.
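
As a concrete example, here is how such a digest can be computed with Python's standard hashlib module; the file name is only a placeholder. MD5 and SHA-1 match the post, though MD5 is already broken for collisions and SHA-1 is considered weak, so SHA-256 is the safer default.

```python
# Compute a file digest for an integrity check (hashlib is standard library).
import hashlib

def file_digest(path, algorithm="sha256"):
    h = hashlib.new(algorithm)              # e.g. "md5", "sha1", "sha256"
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)                 # hash the file in chunks
    return h.hexdigest()

# The sender publishes the digest; the receiver recomputes and compares.
# print(file_digest("download.iso"))        # hypothetical file name
```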




            The last part I want to touch upon is downloading files from file sharing applications. Third-party sites and torrents are often tricky for the user because anyone could have uploaded a file under any name. The most common example I have seen is looking for a specific PDF and finding a file with a similar name, except that instead of a .pdf extension it is an executable. One has to be very careful when the source is unknown or open to anyone.

Sunday, November 3, 2013

Data Structures: Efficiency is Key!

                Data structures are one of the most integral parts of computer software. These days our processors can run through billions of calculations per second, yet searching, accessing, inserting, and deleting data can still take a large chunk of that processing power. This is where data structures save the day, as long as they are built and implemented properly. A data structure stores and organizes data for efficient use. There are many different types of data structures and many different applications for them. Arrays, lists, binary trees, heaps, B-trees, and hash tables are all ways to manage the data of an application. Data structures are also used to create efficient algorithms. In combination, data structures and the resulting algorithms save a great deal of processing power that can be spent on more important tasks.
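
As a quick illustration, Python's built-in types map onto several of the structures above; a minimal sketch of the usual operations:

```python
# list ~ dynamic array, dict ~ hash table, heapq ~ binary min-heap.
import heapq

arr = [5, 1, 4]                  # list: O(1) access by index, O(n) search
arr.append(2)                    # amortized O(1) insert at the end

table = {"alice": 1}             # hash table: average O(1) insert and lookup
table["bob"] = 2
found = "alice" in table         # True, without scanning every entry

heap = [5, 1, 4, 2]
heapq.heapify(heap)              # binary heap: O(n) build, O(log n) push/pop
smallest = heapq.heappop(heap)   # always removes the minimum (here 1)
```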


                Big O notation is used to analyze the efficiency of the operations that go along with data structures, such as searching, inserting, and deleting, as well as the amount of space they use. Most commonly the notation describes the average and worst cases. This is analogous to looking for noodles in a grocery store. The aisles are nicely divided by food categories, so you go look in the pasta aisle and voilà, there it is, the first item in the row. But it may be that you are at a brand new grocery store and the noodles you are looking for are at the far end of the aisle, so now you have to walk an extra 30 feet. Now to the actual notation. Common big O classes include O(1), O(n), O(n²), and O(n log n), where the growth function goes inside the parentheses. Each data structure has its own efficiency for each operation.
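
To tie the notation to code, here is linear search next to binary search in Python: on a sorted list the first is O(n) and the second O(log n), the difference between walking the whole aisle and jumping straight to the right shelf.

```python
# Linear O(n) search vs. binary O(log n) search on a sorted list.
from bisect import bisect_left

def linear_search(items, target):
    for i, item in enumerate(items):    # worst case touches every item
        if item == target:
            return i
    return -1

def binary_search(items, target):       # items must already be sorted
    i = bisect_left(items, target)      # halves the search range each step
    return i if i < len(items) and items[i] == target else -1

data = list(range(0, 1000000, 2))
assert linear_search(data, 999998) == binary_search(data, 999998)
```

On a million items, binary search needs about 20 comparisons where linear search may need a million.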