Friday, October 29, 2010

7.3-7.5, Due on November 1

I am interested in how cryptography answers a variety of problems that go beyond just disguising messages. For example, I was intrigued by section 7.3's scheme for making sure Alice doesn't change her message and Bob doesn't read the message until after an event has occurred. In section 7.4 the authors claim that there is a situation involving elliptic curves where the decision Diffie-Hellman problem is solvable quickly, which I am interested in. I recall getting a teaser about elliptic curves in an earlier section as well.
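After reading 7.3 I tried to convince myself the idea could work at all. Here's a tiny Python sketch of a commitment scheme; it's hash-based rather than whatever construction the book actually uses, and all the names are my own, but it shows both properties: Alice can't change her message after committing, and Bob learns nothing until she reveals.

```python
import hashlib
import secrets

def commit(message):
    """Alice commits: publish the hash, keep the message and nonce secret."""
    nonce = secrets.token_bytes(16)  # random salt so Bob can't guess-and-check
    return hashlib.sha256(nonce + message).digest(), nonce

def reveal_ok(commitment, nonce, message):
    """Bob verifies after the event: the revealed message must match."""
    return hashlib.sha256(nonce + message).digest() == commitment

c, nonce = commit(b"my prediction")
print(reveal_ok(c, nonce, b"my prediction"))    # True: honest reveal
print(reveal_ok(c, nonce, b"a changed story"))  # False: Alice can't switch
```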
The hardest part for me was understanding the ElGamal cryptosystem. I understand the big idea because I understand public key cryptography (not completely, but mostly), but I don't see how the public and private keys are derived in relation to the discrete log problem. RSA was easy because the private key was just two large primes, but I'm not sure what the private keys are in the ElGamal system.
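I eventually pieced together what I think the keys are; here's a Python sketch with toy numbers (p = 2579, α = 2; real systems use huge primes, and the message 1299 is just an example value). The private key is nothing more than a random exponent a, and the public key is β = α^a mod p, so deriving the private key from the public one is exactly the discrete log problem the chapter keeps setting up.

```python
import secrets

# Toy parameters: a prime p and a primitive root alpha of p.
p, alpha = 2579, 2

# Key generation: private key a is a random exponent; public key is beta.
a = secrets.randbelow(p - 2) + 1      # a in [1, p-2], kept secret
beta = pow(alpha, a, p)               # published; finding a from beta is a discrete log

def encrypt(m, p, alpha, beta):
    k = secrets.randbelow(p - 2) + 1  # fresh ephemeral exponent per message
    return pow(alpha, k, p), (m * pow(beta, k, p)) % p

def decrypt(r, t, a, p):
    # t * (r^a)^(-1) = m * beta^k * alpha^(-ak) = m  (mod p)
    return (t * pow(r, p - 1 - a, p)) % p

r, t = encrypt(1299, p, alpha, beta)
print(decrypt(r, t, a, p))  # 1299
```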
P.S. My high school German teacher was named Mrs. El Gamel and she told me 'El Gamel' was Arabic for 'The Camel.'

Wednesday, October 27, 2010

7.2, Due on October 29

I enjoyed reading about Baby Step, Giant Step (section 7.2.2). Not only did it remind me of Neil Armstrong's famous quote, but it also reminded me of the meet-in-the-middle attacks from the earlier sections. I also liked reading this section in light of the material about factoring. Knowing that the past two weeks have been spent learning how to factor in order to break weak RSA systems helped me understand why we are learning all this about discrete log problems. I'm guessing that discrete logarithm problems are going to be used in a public key cryptosystem and all this information we are learning is going to help us crack weak versions of that system.
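To see the meet-in-the-middle flavor concretely, I coded up Baby Step, Giant Step in Python (the toy numbers are my own choices, not the book's example):

```python
from math import isqrt

def bsgs(alpha, beta, p):
    """Solve alpha^x = beta (mod p) by Baby Step, Giant Step.

    Meet in the middle: write x = i*m - j, precompute the baby steps
    beta * alpha^j, then take giant steps alpha^(i*m) until they collide.
    """
    m = isqrt(p - 1) + 1
    baby = {}
    t = beta % p
    for j in range(m):                # baby steps: beta * alpha^j
        baby[t] = j
        t = (t * alpha) % p
    giant = pow(alpha, m, p)
    t = giant
    for i in range(1, m + 1):         # giant steps: alpha^(i*m)
        if t in baby:
            return i * m - baby[t]
        t = (t * giant) % p
    return None                       # beta is not a power of alpha

print(bsgs(2, 9, 11))  # 6, since 2^6 = 64 = 9 (mod 11)
```

Only about sqrt(p) work on each side instead of p on one side, which is the same trade that made the meet-in-the-middle attacks work.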
The two hardest subsections for me were 7.2.1 about the Pohlig-Hellman Algorithm and 7.2.3 about the Index Calculus. The examples were quite long, and despite finally understanding the log notation, I had difficulty following them.
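To get a handle on 7.2.1 I wrote a simplified Pohlig-Hellman in Python. I brute-force the discrete log inside each small subgroup rather than doing the digit-by-digit refinement the book describes, so this is only a sketch of the main idea: split p-1 into prime powers, solve a small problem for each, and glue the answers together with the Chinese Remainder Theorem.

```python
def small_factors(n):
    """Trial-division factorization; fine for toy n."""
    fs, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            fs[d] = fs.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        fs[n] = fs.get(n, 0) + 1
    return fs

def pohlig_hellman(alpha, beta, p):
    """Solve alpha^x = beta (mod p) when p - 1 has only small prime factors."""
    n = p - 1
    residues, moduli = [], []
    for q, e in small_factors(n).items():
        qe = q ** e
        a = pow(alpha, n // qe, p)    # generator of the order-q^e subgroup
        b = pow(beta, n // qe, p)
        for x in range(qe):           # brute force inside the small subgroup
            if pow(a, x, p) == b:
                residues.append(x)
                moduli.append(qe)
                break
    # Chinese Remainder Theorem: combine x mod q^e for each prime power
    x, m = 0, 1
    for r, q in zip(residues, moduli):
        t = ((r - x) * pow(m, -1, q)) % q
        x, m = x + m * t, m * q
    return x

print(pohlig_hellman(2, 9, 11))  # 6, since 2^6 = 9 (mod 11)
```

The security moral seems to be that p-1 had better have at least one large prime factor, or the big problem collapses into easy little ones.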

Tuesday, October 26, 2010

6.5-6.7 and 7.1, Due on October 27

The most interesting portion of these sections for me was the idea of trapdoors. I liked reading about the variety of trapdoors available and how they differ according to the public key system. I also enjoyed reading about other types of public key cryptosystems besides RSA. It's easy to avoid thinking outside the box, but this section helped me see some alternatives to RSA.
One thing I didn't really understand was the discrete logs in section 7.1. I am familiar with the idea of logarithms, but I don't understand the x = L_α(β) notation. I also don't really see how the discrete logs apply to anything so far. I imagine their application will become more apparent in the ensuing sections. The other hard section for me was 6.6, An Application to Treaty Verification. I don't quite know what all is going on there.
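Writing the notation out as code helped me: x = L_α(β) just means the exponent x with α^x ≡ β (mod p). A brute-force Python version (only sensible for tiny p, and with numbers of my own choosing):

```python
def discrete_log(alpha, beta, p):
    """x = L_alpha(beta): the exponent x with alpha^x = beta (mod p),
    found here by brute force (hopeless for the large p used in practice)."""
    for x in range(p - 1):
        if pow(alpha, x, p) == beta:
            return x
    return None  # beta is not a power of alpha mod p

# L_2(9) in Z_11: 2^6 = 64 = 9 (mod 11), so the discrete log is 6.
print(discrete_log(2, 9, 11))  # 6
```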

Sunday, October 24, 2010

6.4.1, Due on October 24

I guess the thing I am most confused about is how we factor the numbers into smaller primes. The example of the basic principle on page 183 starts by saying 9398^2 = 5^5 * 19 (mod 3837523), but I don't really see how they got that or how I would factor an even bigger number. I believe, upon finishing the chapter, that they give the explanation on page 185, but I didn't quite see how it all related. I'm not sure where the expression (⌊√(in)⌋ + j)^2 came from or how it helps us factor n into smaller primes.
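I played with this in Python and it helped. The point of x = ⌊√(in)⌋ + j (for small i and j) seems to be that x^2 mod n is then relatively small, so it has a decent chance of factoring completely over a list of small primes; the factor base and search ranges below are my own choices, not necessarily the book's.

```python
from math import isqrt

n = 3837523
factor_base = [2, 3, 5, 7, 11, 13, 17, 19]

def smooth_part(m):
    """Exponent vector over factor_base if m factors completely over it, else None."""
    if m == 0:
        return None
    exps = []
    for q in factor_base:
        e = 0
        while m % q == 0:
            m //= q
            e += 1
        exps.append(e)
    return exps if m == 1 else None

# Squares of x = isqrt(i*n) + j are only slightly larger than i*n, so
# x^2 mod n is small and sometimes factors over the small primes.
found = []
for i in range(1, 4):
    for j in range(20):
        x = isqrt(i * n) + j
        exps = smooth_part(pow(x, 2, n))
        if exps is not None:
            found.append((x, exps))

assert pow(9398, 2, n) == 5**5 * 19  # the book's relation checks out
print(found[0])  # (1964, [0, 2, 0, 0, 0, 3, 0, 0]): 1964^2 = 3^2 * 13^3 (mod n)
```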
The most interesting part of this section for me was the idea of using matrices. I enjoy linear algebra and I loved my Matrix Analysis class, so I'm pretty eager to see additional applications of matrices. The matrix formed in this section must serve a purpose, although I have not successfully identified it yet. I understand how the matrix is formed, but I don't see where the linear dependence comes in, or why we move to mod 2.
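Thinking about it more, here's my best guess at why mod 2, illustrated in Python with made-up exponent vectors (the numbers are mine, not the book's): we want a product of relations whose combined exponents are all even, because then the right-hand side is a perfect square, and "all exponents even" is exactly "rows sum to zero mod 2", which is where the linear dependence comes in.

```python
# Made-up exponent vectors of three relations over the factor base [2, 3, 5]:
#   x1^2 = 2^1 * 3^2 * 5^1,  x2^2 = 2^1 * 3^0 * 5^3,  x3^2 = 2^2 * 3^2 * 5^0
rows = [[1, 2, 1], [1, 0, 3], [2, 2, 0]]

# Only parity matters: a product of relations is a perfect square exactly
# when every exponent in the sum is even.
parity = [[e % 2 for e in row] for row in rows]

# Rows 1 and 2 have the same parity vector, so their mod-2 sum vanishes:
combined = [(a + b) % 2 for a, b in zip(parity[0], parity[1])]
print(combined)  # [0, 0, 0]

# So (x1*x2)^2 = 2^2 * 3^2 * 5^4 = (2 * 3 * 25)^2: a square on both sides,
# ready for the gcd step.
```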

Thursday, October 21, 2010

6.4, Part I, Due on October 22

Fermat's Factorization was the most interesting portion of this quick section. It is so simple but very powerful for small numbers and it still took a genius to come up with it. I like it because I can understand it and it uses the idea of working backwards to factor (relatively) large numbers.
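Here's the method in Python, working backwards just like the text describes (the test number is my own toy example):

```python
from math import isqrt

def fermat_factor(n):
    """Factor odd n by writing n = a^2 - b^2 = (a - b)(a + b).

    Work backwards: start a at ceil(sqrt(n)) and increase it until
    a^2 - n is a perfect square b^2.
    """
    a = isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:
            return a - b, a + b
        a += 1

print(fermat_factor(295927))  # (541, 547): two factors very close to sqrt(n)
```

It terminates fast exactly when the two factors are close together, which is why RSA primes are supposed to be far apart.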
The p-1 Factoring Algorithm was the only other topic discussed, and it wasn't crystal clear to me. One question I have is how B is chosen. Its only description is that it is a bound, but I'm not sure what it is bounding. This method looks a lot like the Miller-Rabin Primality Test, except it actually gives us factors instead of just determining if a number is prime or composite.
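After staring at it, my understanding is that B bounds the primes allowed in p-1. A Python sketch with my own toy numbers: if n has a prime factor p whose p-1 is built from small enough prime powers that p-1 divides B!, then Fermat's little theorem forces a^(B!) ≡ 1 (mod p), and the gcd step pulls p out.

```python
from math import gcd

def pollard_p_minus_1(n, B):
    """Pollard's p-1 method with smoothness bound B.

    Computes a = 2^(B!) mod n.  If some prime p | n has p - 1 dividing B!,
    then a = 1 (mod p) by Fermat, so gcd(a - 1, n) exposes p.
    """
    a = 2
    for j in range(2, B + 1):
        a = pow(a, j, n)           # after the loop, a = 2^(B!) mod n
    d = gcd(a - 1, n)
    if 1 < d < n:
        return d
    return None                    # try a larger B or a different base

# 540143 = 421 * 1283, and 421 - 1 = 420 = 2^2 * 3 * 5 * 7 uses only primes <= 7:
print(pollard_p_minus_1(540143, 7))  # 421
```

So a small B only catches primes p with very "smooth" p-1, while a huge B catches more primes but costs more work; that's the trade-off being bounded.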

Monday, October 18, 2010

6.3, Due on October 20

This whole section was interesting to me and I enjoyed learning about ways to get around factoring numbers. I found the methods to be very intelligent solutions to a very difficult problem. It's wise of mathematicians to realize that it's a lot easier to show that a number is composite than to show that it's prime, and since a number can't be both you only need to check for one or the other condition. One thing I like about mathematics is that it uses efficient methods to solve difficult problems, and this is definitely one such case. I also find it interesting that, as important as primes are, we still have no efficient way to be absolutely certain that a very large number is prime. Mathematics is definitely a living field with intriguing problems still existing today.
The actual primality tests were difficult to follow. Fermat's test was easier to understand (at least the suppositions), but the latter two tests were much more difficult to read and follow. The arguments were bolstered by examples, but I do not understand what exactly is going on with these tests.
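To at least pin down Fermat's test for myself, here's a Python version. The trial count and the random choice of bases are my own decisions; the math is just Fermat's little theorem: a prime p forces a^(p-1) ≡ 1 (mod p), so any base that violates this proves compositeness without exhibiting a factor.

```python
import secrets

def fermat_test(n, trials=20):
    """Fermat's primality test: a failing base proves n composite; passing
    every trial only makes n *probably* prime (Carmichael numbers fool
    this test for every base coprime to them)."""
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        a = 2 + secrets.randbelow(n - 3)   # random base in [2, n-2]
        if pow(a, n - 1, n) != 1:
            return False                   # witness: n is certainly composite
    return True                            # probably prime

print(fermat_test(2579))     # True: 2579 is prime
print(fermat_test(3837523))  # False: the chapter's composite example
```

Notice it declares 3837523 composite without telling us any factor, which matches the point of the section: compositeness is much cheaper to detect than factorizations are to find.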

Friday, October 15, 2010

3.10, Due on October 18

I enjoy this idea of square roots in modular arithmetic. Sometimes when I get a little confused in modular arithmetic I just remind myself what modular arithmetic actually means, and when I put the equation into 'regular' arithmetic it makes more sense. One question I have about the Legendre and Jacobi symbols regards their relationship to fractions. Is the fraction notation just an arbitrary symbol, or is there a connection between the modular arithmetic and fractions? I'm guessing there is a connection but I haven't made it yet.
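One thing I can compute even with the fraction mystery unresolved: Euler's criterion gives the Legendre symbol directly. A quick Python sketch (as far as I can tell, the fraction-style notation (a/p) is just notation, not an actual quotient):

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p via Euler's criterion:
    a^((p-1)/2) is 1 mod p if a is a nonzero square mod p, -1 if not,
    and 0 if p divides a."""
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

# 2 is a square mod 7 (3^2 = 9 = 2), but 3 is not:
print(legendre(2, 7))  # 1
print(legendre(3, 7))  # -1
print(legendre(7, 7))  # 0
```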
I became a little confused with the examples. One difficulty for me was that they kept referring to the steps of the Theorem on page 91 in the examples, and I kept having to flip between pages to understand the reasoning. While it may sound trivial, having to flip the pages over and over again made the example much harder to follow. In class we have the advantage of writing the theorem down on one board and doing an example on the other so that they may be seen simultaneously.