Saturday, November 30, 2013

Selection sort sorts a list by starting at the beginning of the list, taking the first element, and looking through the rest of the list for anything smaller. Each time a smaller element is encountered, it becomes the new candidate, and the scan continues until the smallest element in the whole list is found. Once the smallest element is found, it's swapped to the front of the list, and the process repeats on the leftmost unsorted part of the list. The worst case and best case performances both have a running time of n^2, since the scan happens in full no matter how sorted the list already is.
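
Here's a minimal sketch of the idea in Python (the function name and details are my own, just to illustrate):

    def selection_sort(lst):
        """Sort lst in place by repeatedly swapping the smallest
        remaining element to the front of the unsorted part."""
        for i in range(len(lst)):
            smallest = i
            # Scan the unsorted part for anything smaller.
            for j in range(i + 1, len(lst)):
                if lst[j] < lst[smallest]:
                    smallest = j
            # Move the smallest element to the front of the unsorted part.
            lst[i], lst[smallest] = lst[smallest], lst[i]

The two nested loops are exactly why both cases are n^2: the inner scan runs in full even on a list that is already sorted.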

Quick sort sorts a list by picking some element (often a random one) and making it the pivot. Then, starting from the first element and from the last element and moving inward, it checks whether the element on the left is bigger than the pivot and whether the element on the right is smaller than the pivot. If one or both of the checks are false, it moves to the next element closer to the pivot and checks again. Once 2 elements satisfy both checks, they switch places, and the same check continues on the next elements closer to the pivot. After every element on the left side is smaller than the pivot and every element on the right side is bigger than the pivot, the pivot is in its correct sorted position in the list. Quick sort is then applied to the left and right parts of the list on either side of the pivot until every element is in its correct sorted position. The worst case performance has a running time of n^2 and the best case performance has a running time of nlog(n).
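
A Python sketch of that in-place partitioning (I use the middle element as the pivot instead of a random one, and the names are my own):

    def quick_sort(lst, low=0, high=None):
        """Sort lst in place between indexes low and high."""
        if high is None:
            high = len(lst) - 1
        if low < high:
            p = partition(lst, low, high)
            quick_sort(lst, low, p - 1)   # left part: smaller than pivot
            quick_sort(lst, p + 1, high)  # right part: bigger than pivot

    def partition(lst, low, high):
        """Scan inward from both ends, swapping elements that are on
        the wrong side of the pivot, then drop the pivot in between."""
        mid = (low + high) // 2
        lst[mid], lst[high] = lst[high], lst[mid]  # park the pivot at the end
        pivot = lst[high]
        i, j = low, high - 1
        while True:
            while i <= j and lst[i] <= pivot:
                i += 1
            while i <= j and lst[j] >= pivot:
                j -= 1
            if i > j:
                break
            lst[i], lst[j] = lst[j], lst[i]
        lst[i], lst[high] = lst[high], lst[i]  # pivot into its final spot
        return i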

Merge sort sorts a list by breaking it into 2 halves, and keeps doing this until each sub-list has 1 element in it. Then pairs of sub-lists are merged: the front elements of the 2 sub-lists are compared and the smaller one is moved to the end of a new list, over and over, until one sub-list runs out and the rest of the other is appended. The 2 sub-lists have now merged into one sorted list, and this merging is applied to pairs of sub-lists until 1 list is left, which is the complete sorted list. The worst case and best case performances both have a running time of nlog(n).
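
A Python sketch of this (again, the names are mine):

    def merge_sort(lst):
        """Return a new sorted list: split in half, sort each half,
        then merge the two sorted halves."""
        if len(lst) <= 1:
            return lst
        mid = len(lst) // 2
        return merge(merge_sort(lst[:mid]), merge_sort(lst[mid:]))

    def merge(left, right):
        """Merge two sorted lists by repeatedly taking the smaller
        front element."""
        result = []
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                result.append(left[i])
                i += 1
            else:
                result.append(right[j])
                j += 1
        # One list ran out; the rest of the other is already sorted.
        result.extend(left[i:])
        result.extend(right[j:])
        return result

Every element gets copied once at each of the log(n) levels of splitting, which is where the nlog(n) comes from in both the best and worst cases.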

In sorting algorithm terms for computer science, efficiency is what ranks algorithms from best to worst depending on the kind of case, like a list that is already sorted or one that is partially sorted. When an algorithm is said to be efficient, it means the algorithm uses the least amount of time and effort to sort a given list. Out of the sorting algorithms talked about above, merge sort is the most consistently time efficient no matter what kind of list is being sorted: sorted or not, it partitions all sub-lists into smaller lists and merges them back in nlog(n) time, which makes it dependable for a very large list. The least time efficient sorting algorithm talked about above is selection sort. Its best case, n^2, is only as good as quick sort's worst case, and even given a reversed list it has to look over the whole unsorted part to find the smallest element as many times as there are elements in the list. Although merge sort is the most time efficient and selection sort is the easiest to write, quick sort is generally the best balance of time and effort, since it can be written in few lines of code and is usually just as fast as merge sort. Out of the 3 sorting algorithms talked about above, the rank of general efficiency from best to worst is quick sort, merge sort, then selection sort.
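
To make the comparison concrete, here's a rough timing sketch that reuses the three Python sketches above (the list size and repeat count are arbitrary choices of mine, and it assumes the three functions are defined in the same file):

    import random
    import timeit

    data = [random.randint(0, 10000) for _ in range(2000)]

    # Each statement sorts a fresh copy so the input is the same every time.
    for name, call in [('selection sort', 'selection_sort(data[:])'),
                       ('quick sort', 'quick_sort(data[:])'),
                       ('merge sort', 'merge_sort(data[:])')]:
        seconds = timeit.timeit(call, globals=globals(), number=5)
        print(name, 'took', round(seconds, 3), 'seconds')

On a random list like this, selection sort should come out far behind the other two, which is the n^2 versus nlog(n) gap showing up in practice.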

Saturday, October 19, 2013

Friday, October 18, 2013

WAIT MY E4A WAS FINE BUT DID THEY CHANGE THE TEST CASES?!?! I JUST CHECKED AND I FAILED 4 TEST CASES

Sunday, October 13, 2013

Object-oriented programming is a style of programming in which a programmer can, in a way, figuratively create objects. In order for a particular object to be useful, the programmer needs to implement what kinds of actions this object can do; the programming term for these actions is a method. The methods associated with an object are what make object-oriented programming useful, because they complete certain tasks in only a few lines in an efficient manner. Also, with object-oriented programming, if one were to create an object that has similar methods to another, instead of typing each method out again, one can inherit those methods from the other object. This can save a lot of time, and can save a lot of problems if the object being inherited from has already been debugged with no other problems found. If someone decided not to inherit and typed each individual method instead, there would have to be multiple checks to make sure this object was made perfectly without any bugs.
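
A small made-up Python example of inheritance (the class names are mine, not from any course code):

    class Stack:
        """An example object: a last-in, first-out container."""

        def __init__(self):
            self._items = []

        def push(self, item):
            """Add item to the top of the stack."""
            self._items.append(item)

        def pop(self):
            """Remove and return the top item."""
            return self._items.pop()

    class SizedStack(Stack):
        """Inherits push and pop from Stack without retyping them."""

        def size(self):
            """The one new method this subclass adds."""
            return len(self._items)

SizedStack gets push and pop for free, so if Stack has already been debugged, only the one new method needs checking.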

Recursion is a method of programming used to reach a final solution built out of the solutions to smaller occurrences of the same problem. A recursive method is easy to recognize: it has a base case that produces an answer directly, and if the conditions of the base case aren't satisfied, the method calls itself in order to work down toward the base case and produce the final solution. Recursion becomes very useful for problems that are repetitive in their structure, such as finding the depth of a nested list. If a list in a nested list contains more lists of lists, the solution sounds like it would take many lines of code, but with recursion it can be reduced to essentially 1 line. With this type of method, something that could nest an arbitrary number of times in a large example turns into a finite answer, built from the solutions of smaller problems.
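
Here's roughly what that looks like in Python (my own sketch of the nested-list depth idea; I count a non-list as depth 0):

    def depth(obj):
        """Return how deeply nested obj is: 0 for a non-list,
        1 for a flat list, and one more than the deepest item
        otherwise."""
        if not isinstance(obj, list):
            return 0  # base case: not a list at all
        # Recursive case: essentially the whole algorithm in 1 line.
        # default=0 makes the depth of an empty list come out as 1.
        return 1 + max((depth(item) for item in obj), default=0)

    print(depth([1, [2, [3]]]))  # prints 3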

Tuesday, October 8, 2013

Did as much as I could on that assignment.... dat assignment....

Monday, September 23, 2013