Welcome to baselogics.blogspot.in


A place to learn programming.

Bits and Bytes....

Computer Scientists Love Their Algorithms and Data Structures

Explore all new Mathematical Puzzles......


Saturday, August 30, 2014

History of Java Language

                                   Java is the unmatched programming language that you will find on virtually every software platform in the world. On 23 May 1995, John Gage, director of the Science Office of Sun Microsystems, along with Marc Andreessen, co-founder and executive vice president at Netscape, announced to a SunWorld audience that Java technology was not a myth, that it was a reality, and that it was going to be incorporated into Netscape Navigator.
At the time, fewer than 30 people were working on Java. This team would shape the next decade, and no one had any idea what was in store. From being the mind of an unmanned vehicle on Mars to being the operating environment of much consumer electronics, e.g., cable set-top boxes, VCRs, toasters, and personal digital assistants (PDAs), Java has come a long way from its inception.

The heuristic method



                      In optimization problems, heuristic algorithms can be used to find a solution close to the optimal one in cases where finding the optimal solution itself is impractical. These algorithms work by getting closer and closer to the optimal solution as they progress. In principle, if run for an infinite amount of time, they will find it. Their merit is that they can find a solution very close to the optimal solution in a relatively short time. Such algorithms include local search, tabu search, simulated annealing, and genetic algorithms. Some of them, like simulated annealing, are non-deterministic algorithms, while others, like tabu search, are deterministic. When a bound on the error of the non-optimal solution is known, the algorithm is further categorized as an approximation algorithm.
source:wikipedia
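To make the idea concrete, here is a small sketch in Java of one classic heuristic: the nearest-neighbour rule for the travelling salesman problem. It builds a tour quickly but does not guarantee the optimal one; the distance matrix below is made up purely for illustration.

import java.util.Arrays;

// A minimal heuristic sketch: the nearest-neighbour rule for the travelling
// salesman problem. It is fast but only approximates the optimal tour; the
// distance matrix here is invented purely for illustration.
public class NearestNeighbourTsp {
    public static void main(String[] args) {
        double[][] dist = {
            {0, 2, 9, 10},
            {2, 0, 6, 4},
            {9, 6, 0, 8},
            {10, 4, 8, 0}
        };
        int n = dist.length;
        boolean[] visited = new boolean[n];
        int[] tour = new int[n];
        int current = 0;                 // start the tour at city 0
        tour[0] = 0;
        visited[0] = true;
        for (int step = 1; step < n; step++) {
            int next = -1;
            for (int city = 0; city < n; city++) {
                // pick the closest city that has not been visited yet
                if (!visited[city] && (next == -1 || dist[current][city] < dist[current][next])) {
                    next = city;
                }
            }
            visited[next] = true;
            tour[step] = next;
            current = next;
        }
        System.out.println("Heuristic tour: " + Arrays.toString(tour));
    }
}

On this tiny instance the rule happens to do well, but on larger inputs a nearest-neighbour tour can be noticeably longer than the optimal one, which is exactly the trade-off described above.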

Friday, August 29, 2014

Method 5: Hill Climbing Technique


                  Hill climbing is a technique for certain classes of optimization problems. A similar idea underlies Newton's method for finding roots. The idea is to start with a sub-optimal solution to a problem (i.e., start at the base of a hill) and then repeatedly improve the solution (walk up the hill) until some condition is maximized (the top of the hill is reached).
Hill-Climbing Methodology
1. Construct a sub-optimal solution that meets the constraints of the problem
2. Take the solution and make an improvement upon it
3. Repeatedly improve the solution until no more improvements are necessary/possible

                One of the most popular hill-climbing problems is the network flow problem. Although network flow may sound somewhat specific it is important because it has high expressive power: for example, many algorithmic problems encountered in practice can actually be considered special cases of network flow. After covering a simple example of the hill-climbing approach for a numerical problem we cover network flow and then present examples of applications of network flow.
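To see the three steps in action on a toy numerical problem of the kind mentioned above, here is a small Java sketch that climbs to the maximum of f(x) = -(x - 3)^2 + 9. The function, starting point, and step size are all assumptions chosen just for illustration.

// A minimal sketch of the hill-climbing methodology above, applied to a toy
// numerical problem: maximise f(x) = -(x - 3)^2 + 9. The function, starting
// point, and step size are assumptions made for illustration.
public class HillClimbing {
    static double f(double x) {
        return -(x - 3) * (x - 3) + 9;
    }

    public static void main(String[] args) {
        double x = 0.0;          // step 1: start with a sub-optimal solution
        double step = 0.1;
        while (true) {
            // step 2: try to improve the current solution by a small move
            double left = f(x - step);
            double right = f(x + step);
            double best = Math.max(left, right);
            if (best <= f(x)) {
                break;           // step 3: stop when no move improves the value
            }
            x = (right >= left) ? x + step : x - step;
        }
        System.out.println("Peak found near x = " + x + ", f(x) = " + f(x));
    }
}

Because this particular f has a single peak, the climb always ends near the true maximum at x = 3; on functions with several peaks, plain hill climbing can get stuck on a local one.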

Method 4: Greedy Algorithm

                         In the backtracking algorithms we looked at, we saw algorithms that found decision points and recursed over all options from that decision point. A greedy algorithm can be thought of as a backtracking algorithm where at each decision point "the best" option is already known and thus can be picked without having to recurse over any of the alternative options.
                         The name "greedy" comes from the fact that the algorithms make decisions based on a single criterion, instead of a global analysis that would take into account the decision's effect on further steps. As we will see, such a backtracking analysis will be unnecessary in the case of greedy algorithms, so it is not greedy in the sense of causing harm for only short-term gain.
                         Unlike backtracking, a greedy algorithm cannot be devised for every problem; not every problem is "solvable" using greedy algorithms. Viewing the search for a solution to an optimization problem as climbing a hill, greedy algorithms can be used only for those hills where, at every point, taking the steepest step always leads to the peak.
                         Greedy algorithms tend to be very efficient and can be implemented in a relatively straightforward fashion, often in O(n) time, since there is only a single choice to make at every point. However, most attempts at creating a correct greedy algorithm fail unless a precise proof of the algorithm's correctness is first demonstrated. When a greedy strategy fails to produce optimal results on all inputs, we instead refer to it as a heuristic rather than an algorithm. Heuristics can be useful when speed is more important than exact results (for example, when "good enough" results are sufficient).
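As a small illustration, here is a Java sketch of a greedy algorithm for making change with the denominations 25, 10, 5, and 1, a coin system for which always taking the largest coin that fits is known to give an optimal answer. The amount 67 is just an example value.

import java.util.ArrayList;
import java.util.List;

// A minimal greedy sketch: making change with the denominations 25, 10, 5, 1.
// For this coin system, always taking the largest coin that still fits does
// lead to an optimal answer. The amount 67 is an arbitrary example.
public class GreedyChange {
    public static void main(String[] args) {
        int[] coins = {25, 10, 5, 1};    // must be sorted from largest to smallest
        int amount = 67;
        List<Integer> change = new ArrayList<>();
        for (int coin : coins) {
            while (amount >= coin) {     // "the best" option here is the largest coin
                change.add(coin);
                amount -= coin;
            }
        }
        System.out.println("Change for 67: " + change);
    }
}

With an arbitrary set of denominations the same greedy choice can produce a non-optimal answer, which is exactly the situation where we would call it a heuristic rather than an algorithm.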

Memoization Methodology for backtracking

Memoization Methodology
1. Start with a backtracking algorithm
2. Look up the problem in a table; if there's a valid entry for it, return that value
3. Otherwise, compute the problem recursively, and then store the result in the table before returning the value
                         
                         Consider the solution presented in the backtracking chapter for the Longest Common Subsequence problem. In the execution of that algorithm, many common subproblems were computed repeatedly. As an optimization, we can compute these subproblems once and then store the result to read back later. A recursive memoization algorithm can be turned "bottom-up" into an iterative algorithm that fills in a table of solutions to subproblems. Some of the subproblems solved might not be needed by the end result (and that is where dynamic programming differs from memoization), but dynamic programming can be very efficient because the iterative version can better use the cache and have less call overhead. Asymptotically, dynamic programming and memoization have the same complexity.
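A rough Java sketch of this idea for the Longest Common Subsequence problem is given below: the backtracking recursion is unchanged, but every subproblem's answer is looked up in a table first and stored before returning, following the three steps above. The example strings in main are arbitrary.

// A minimal sketch of memoised backtracking for the Longest Common Subsequence
// problem mentioned above: the plain recursion is kept, but each subproblem's
// answer is stored in a table so it is computed only once. The strings used in
// main are just example inputs.
public class LcsMemo {
    static Integer[][] memo;

    static int lcs(String a, String b, int i, int j) {
        if (i == a.length() || j == b.length()) {
            return 0;                          // base case: one string is exhausted
        }
        if (memo[i][j] != null) {
            return memo[i][j];                 // step 2: reuse a stored subproblem
        }
        int result;
        if (a.charAt(i) == b.charAt(j)) {
            result = 1 + lcs(a, b, i + 1, j + 1);
        } else {
            result = Math.max(lcs(a, b, i + 1, j), lcs(a, b, i, j + 1));
        }
        memo[i][j] = result;                   // step 3: store before returning
        return result;
    }

    public static void main(String[] args) {
        String a = "nematode", b = "empty";
        memo = new Integer[a.length()][b.length()];
        System.out.println("LCS length: " + lcs(a, b, 0, 0));   // prints 3 ("emt")
    }
}

Without the memo table the same recursion would revisit the same (i, j) pairs over and over; with it, each pair is solved only once.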

Method 3: Dynamic Programming

                   Dynamic programming is another method for developing new algorithms. Dynamic programming can be thought of as an optimization technique for particular classes of backtracking algorithms where subproblems are repeatedly solved. Note that the term dynamic in dynamic programming should not be confused with dynamic programming languages, like Scheme or Lisp. Nor should the term programming be confused with the act of writing computer programs. In the context of algorithms, dynamic programming always refers to the technique of filling in a table with values computed from other table values. (It's dynamic because the values in the table are filled in by the algorithm based on other values of the table, and it's programming in the sense of setting things out in a table, like how television programming is concerned with when to broadcast what shows.)
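As a tiny illustration of filling in a table from earlier table values, here is a bottom-up computation of Fibonacci numbers in Java; the value n = 40 is an arbitrary example.

// A minimal sketch of the "filling in a table" idea: computing Fibonacci
// numbers bottom-up. Each entry is derived from earlier table entries, so the
// exponential recursion of the naive definition is avoided.
public class FibonacciTable {
    public static void main(String[] args) {
        int n = 40;
        long[] table = new long[n + 1];
        table[0] = 0;
        table[1] = 1;
        for (int i = 2; i <= n; i++) {
            // each new value is computed from values already in the table
            table[i] = table[i - 1] + table[i - 2];
        }
        System.out.println("Fibonacci(" + n + ") = " + table[n]);
    }
}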

Method 2: Backtracking


Backtracking is a general algorithmic technique that considers searching every possible combination in order to solve an optimization problem. Backtracking is also known as depth-first search or branch and bound. By inserting more knowledge of the problem, the search tree can be pruned to avoid considering cases that don't look promising. While backtracking is useful for hard problems to which we do not know more efficient solutions, it is a poor solution for the everyday problems that other techniques are much better at solving. However, dynamic programming and greedy algorithms can be thought of as optimizations to backtracking, so the general technique behind backtracking is useful for understanding these more advanced concepts. Learning and understanding backtracking techniques first provides a good stepping stone to these more advanced techniques because you won't have to learn several new concepts all at once.
Backtracking Methodology
1. View picking a solution as a sequence of choices
2. For each choice, consider every option recursively
3. Return the best solution found
                                         This methodology is generic enough that it can be applied to most problems. However, even when taking care to improve a backtracking algorithm, it will probably still take exponential time rather than polynomial time. Additionally, exact time analysis of backtracking algorithms can be extremely difficult; instead, simpler upper bounds that may not be tight are given.
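Here is a small Java sketch of the methodology: deciding whether some subset of an array sums to a target. Picking a solution is viewed as a sequence of take-it-or-skip-it choices, and both options at each decision point are explored recursively. The numbers and the target in main are example values only.

// A minimal sketch of the backtracking methodology above: deciding whether
// some subset of an array sums to a target. Each element is a decision point
// (take it or skip it), and both options are explored recursively.
public class SubsetSumBacktracking {
    static boolean subsetSum(int[] nums, int index, int remaining) {
        if (remaining == 0) {
            return true;                       // found a sequence of choices that works
        }
        if (index == nums.length || remaining < 0) {
            return false;                      // dead end: backtrack
        }
        // option 1: take nums[index]; option 2: skip it
        return subsetSum(nums, index + 1, remaining - nums[index])
            || subsetSum(nums, index + 1, remaining);
    }

    public static void main(String[] args) {
        int[] nums = {3, 34, 4, 12, 5, 2};
        System.out.println(subsetSum(nums, 0, 9));   // true: 4 + 5 = 9
    }
}

As the text warns, this runs in exponential time in the worst case: with n numbers there are up to 2^n combinations of choices to explore.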

Method 1: Divide and Conquer

The first major algorithmic technique is divide and conquer.
This is the technique that the British used to enter India.
Part of the trick of making a good divide and conquer algorithm is determining how a given problem could be separated into two or more similar, but smaller, subproblems. More generally, when we are creating a divide and conquer algorithm we will take the following steps:

Divide and Conquer Methodology

1. Given a problem, identify a small number of significantly smaller subproblems of the same type
2. Solve each subproblem recursively (the smallest possible size of a subproblem is a base-case)
3. Combine these solutions into a solution for the main problem
The first algorithm we'll present using this methodology is the merge sort.
Merge Sort
The problem that merge sort solves is general sorting: given an unordered array of elements that have a total ordering, create an array that has the same elements sorted. More precisely, for an array a with indexes 1 through n, if the condition
for all i, j such that 1 ≤ i < j ≤ n then a[i] ≤ a[j]
holds, then a is said to be sorted. Here is the interface:
// sort -- returns a sorted copy of array a
function sort(array a): array

Following the divide and conquer methodology, how can a be broken up into smaller subproblems? Because a is an array of n elements, we might want to start by breaking the array into two arrays of size n/2 elements. These smaller arrays will also be unsorted and it is meaningful to sort these smaller problems; thus we can consider these smaller arrays "similar". Ignoring the base case for a moment, this reduces the problem into a different one: given two sorted arrays, how can they be combined to form a single sorted array that contains all the elements of both given arrays?

// merge -- given a and b (assumed to be sorted) returns a merged array that
// preserves order
function merge(array a, array b): array

So far, following the methodology has led us to this point, but what about the base case? The base case is the part of the algorithm concerned with what happens when the problem cannot be broken into smaller subproblems. Here, the base case is when the array has only zero or one elements, which are trivially sorted.
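Putting the pieces together, here is a sketch of merge sort in Java following this methodology; it is written for int arrays to keep the example short. sort returns a sorted copy of its input, merge combines two already-sorted arrays, and arrays of zero or one elements form the base case.

import java.util.Arrays;

// A minimal sketch of the merge sort described above: sort returns a sorted
// copy of its input, and merge combines two already-sorted arrays.
public class MergeSort {
    // sort -- returns a sorted copy of array a
    static int[] sort(int[] a) {
        if (a.length <= 1) {
            return a.clone();                          // base case: zero or one element
        }
        int mid = a.length / 2;
        int[] left = sort(Arrays.copyOfRange(a, 0, mid));        // solve each half recursively
        int[] right = sort(Arrays.copyOfRange(a, mid, a.length));
        return merge(left, right);                     // combine the two sorted halves
    }

    // merge -- given a and b (assumed to be sorted) returns a merged array that preserves order
    static int[] merge(int[] a, int[] b) {
        int[] result = new int[a.length + b.length];
        int i = 0, j = 0, k = 0;
        while (i < a.length && j < b.length) {
            result[k++] = (a[i] <= b[j]) ? a[i++] : b[j++];
        }
        while (i < a.length) result[k++] = a[i++];     // copy whatever remains in a
        while (j < b.length) result[k++] = b[j++];     // copy whatever remains in b
        return result;
    }

    public static void main(String[] args) {
        int[] unsorted = {5, 2, 9, 1, 5, 6};
        System.out.println(Arrays.toString(sort(unsorted)));    // [1, 2, 5, 5, 6, 9]
    }
}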

Is Efficiency Really Required for Every Programme?

When is Efficiency Important?
                                    The major question in front of every programmer.......

 Not every problem requires the most efficient solution available. For our purposes, the term efficient is associated with the time or space needed to perform the task. When either time or space is abundant and cheap, it may not be worth it to pay a programmer to spend a day or so working to make a program faster.
However, here are some cases where efficiency matters:


• When resources are limited, a change in algorithms could create great savings and allow limited machines (like cell phones, embedded systems, and sensor networks) to be stretched to the frontier of possibility.


• When the data is large a more efficient solution can mean the difference between a task finishing in two days versus two weeks. Examples include physics, genetics, web searches, massive online stores, and network traffic analysis.


• Real time applications: the term "real time applications" actually refers to computations that give time guarantees, versus meaning "fast." However, the quality can be increased further by choosing the appropriate algorithm.


• Computationally expensive jobs, like fluid dynamics, partial differential equations, VLSI design, and cryptanalysis can sometimes only be considered when the solution is found efficiently enough.


• When a subroutine is common and frequently used, time spent on a more efficient implementation can result in benefits for every application that uses the subroutine. Examples include sorting, searching, pseudorandom number generation, kernel operations, database queries, and graphics.
In short, it's important to save time when you do not have any time to spare.
When is efficiency unimportant? Examples of such cases include prototypes that are used only a few times, cases where the input is small, cases where simplicity and ease of maintenance are more important, cases where the area concerned is not the bottleneck, or cases where there is another process or area in the code that would benefit far more from efficient design and attention to the algorithm(s).

Understanding an Algorithm



An excellent video that describes the concept of an algorithm very well, by David J. Malan,
at a TED-Ed event.



source: youtube.com  publisher: TED-Ed

Sunday, August 24, 2014

Feedback About the D3 Labs Workshop @ IIT KGP by Prameel Arjun, CEO of D3 Labs

The seminar was good and really benefited me, since I use blogs but didn't know how to use AdSense and make the most of my blogging. Based upon the first day of the workshop, I would rate them 4.5 out of 5.0. The hacking part was also nicely introduced, with some tricks.

Algorithms

A good algorithm ensures certain properties that keep it concise and effective.
The crucial techniques to keep in mind while designing an algorithm are as follows:
• the divide and conquer technique
• the use of randomization in algorithms
• the general, but typically inefficient, backtracking technique
• dynamic programming as an efficient optimization for some backtracking algorithms
• greedy algorithms as an optimization of other kinds of backtracking algorithms
• hill-climbing techniques, including network flow

The basics of algorithms and data structures video -
source:Youtube

• Divide and Conquer: Many problems, particularly when the input is given in an array, can be solved by cutting the problem into smaller pieces, solving the smaller parts recursively, and then combining the solutions into a single result.
• Randomization: Increasingly, randomization techniques are important for many applications. This chapter presents some classical algorithms that make use of random numbers.      
• Backtracking: Almost any problem can be cast in some form as a backtracking algorithm. In backtracking, you consider all possible choices to solve a problem and recursively solve subproblems under the assumption that the choice is taken. The set of recursive calls generates a tree in which each set of choices in the tree is considered consecutively. Consequently, if a solution exists, it will eventually be found. 
Backtracking is generally an inefficient, brute-force technique, but there are optimizations that can be performed to reduce both the depth of the tree and the number of branches. The technique is called backtracking because after one leaf of the tree is visited, the algorithm will go back up the call stack (undoing choices that didn't lead to success), and then proceed down some other branch. To be solved with backtracking techniques, a problem needs to have some form of "self-similarity," that is, smaller instances of the problem (after a choice has been made) must resemble the original problem. Usually, problems can be generalized to become self-similar. 
• Dynamic Programming: Dynamic programming is an optimization technique for backtracking algorithms. When subproblems need to be solved repeatedly (i.e., when there are many duplicate branches in the backtracking algorithm) time can be saved by solving all of the subproblems first (bottom-up, from smallest to largest) and storing the solution to each subproblem in a table. Thus, each subproblem is only visited and solved once instead of repeatedly. The "programming" in this technique's name comes from programming in the sense of writing things down in a table; for example, television programming is making a table of what shows will be broadcast when. 
• Greedy Algorithms: A greedy algorithm can be useful when enough information is known about possible choices that "the best" choice can be determined without considering all possible choices. Typically, greedy algorithms are not challenging to write, but they are difficult to prove correct. 
• Hill Climbing: The final technique we explore is hill climbing. The basic idea is to start with a poor solution to a problem, and then repeatedly apply optimizations to that solution until it becomes optimal or meets some other requirement. An important case of hill climbing is network flow. Despite the name, network flow is useful for many problems that describe relationships, so it's not just for computer networks. Many matching problems can be solved using network flow. 






About me

 My Details
     Name: Amit Sanjay Patil
     E-mail address: apapapap44@gmail.com
     Student @ IIT Kharagpur (Mechanical Department)
     Hometown: Jalgaon District, Maharashtra
                         


                                        I am interested in developing smarter algorithms for computer programmes. I think it is the algorithm alone that decides the size of a programme. Algorithms are like "the more you look, the more you see": there is always an alternative way to do any task, and the best possible one is the perfect algorithm.