Array rotation by k places can be done by using an extra array of size k, by shifting the array one position at a time in a loop that runs k times, or by moving elements along gcd(array.length, k) cycle sets. The following program has all the implementations:
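The post's program isn't reproduced here; below is a minimal sketch of just the gcd-based (juggling) variant for left rotation, with illustrative class and method names:

import java.util.Arrays;

public class RotateArray {
    // Left-rotate a[] by k positions using gcd(n, k) cycles: O(n) time, O(1) space.
    static void rotate(int[] a, int k) {
        int n = a.length;
        k = k % n;
        if (k == 0) return;
        int sets = gcd(n, k);
        for (int start = 0; start < sets; start++) {
            int temp = a[start];
            int i = start;
            while (true) {
                int j = (i + k) % n;      // element that moves into position i
                if (j == start) break;
                a[i] = a[j];
                i = j;
            }
            a[i] = temp;
        }
    }

    static int gcd(int a, int b) { return b == 0 ? a : gcd(b, a % b); }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4, 5, 6};
        rotate(a, 2);
        System.out.println(Arrays.toString(a)); // [3, 4, 5, 6, 1, 2]
    }
}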
Monday, 24 August 2020
Saturday, 1 August 2020
Sort By Frequency - Hashing Problem
Problem: Sort elements by their frequency.
Following is the implementation, where we store the frequencies in a hashmap:
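The post's code isn't reproduced here; a minimal sketch (class name and tie-breaking convention are mine - higher frequency first, ties by value):

import java.util.*;

public class SortByFrequency {
    static List<Integer> sortByFrequency(int[] a) {
        Map<Integer, Integer> freq = new HashMap<>();
        for (int x : a) freq.merge(x, 1, Integer::sum);   // count occurrences
        List<Integer> result = new ArrayList<>();
        for (int x : a) result.add(x);
        result.sort((p, q) -> {
            int f = freq.get(q) - freq.get(p);   // higher frequency first
            return f != 0 ? f : p - q;           // tie-break by value
        });
        return result;
    }

    public static void main(String[] args) {
        System.out.println(sortByFrequency(new int[]{5, 5, 4, 6, 4, 5})); // [5, 5, 5, 4, 4, 6]
    }
}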
Count Distinct Elements In Every Window - Hashing Problem
Problem: Count the distinct elements in every window of a given size K.
Following is the implementation, which slides the window across the array:
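A minimal sliding-window sketch, assuming int arrays (the class name is illustrative):

import java.util.*;

public class DistinctInWindow {
    // For every window of size k, print the number of distinct elements.
    static void countDistinct(int[] a, int k) {
        Map<Integer, Integer> freq = new HashMap<>();
        for (int i = 0; i < k; i++) freq.merge(a[i], 1, Integer::sum);
        System.out.println(freq.size());
        for (int i = k; i < a.length; i++) {
            // slide: drop a[i-k], add a[i]
            if (freq.merge(a[i - k], -1, Integer::sum) == 0) freq.remove(a[i - k]);
            freq.merge(a[i], 1, Integer::sum);
            System.out.println(freq.size());
        }
    }

    public static void main(String[] args) {
        countDistinct(new int[]{1, 2, 1, 3, 4, 2, 3}, 4); // 3 4 4 3
    }
}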
Print Non-repeated Elements - Hashing Problems
Problem: Print the non-repeated elements of an array.
Following is the implementation:
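A minimal sketch, assuming a LinkedHashMap so the output preserves first-appearance order (a choice of mine; the post may differ):

import java.util.*;

public class NonRepeated {
    // Print elements that occur exactly once, in order of first appearance.
    static void printNonRepeated(int[] a) {
        Map<Integer, Integer> freq = new LinkedHashMap<>();
        for (int x : a) freq.merge(x, 1, Integer::sum);
        for (Map.Entry<Integer, Integer> e : freq.entrySet())
            if (e.getValue() == 1) System.out.print(e.getKey() + " ");
        System.out.println();
    }

    public static void main(String[] args) {
        printNonRepeated(new int[]{1, 2, 3, 2, 1, 4}); // 3 4
    }
}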
Relative Sorting - Hashing Problems
Problem: Sort array1 according to the order of elements in array2.
Following is the implementation:
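A minimal sketch, assuming the usual convention that elements absent from array2 go at the end in ascending order:

import java.util.*;

public class RelativeSort {
    static int[] relativeSort(int[] a1, int[] a2) {
        Map<Integer, Integer> freq = new TreeMap<>();   // sorted keys for the leftovers
        for (int x : a1) freq.merge(x, 1, Integer::sum);
        int[] out = new int[a1.length];
        int i = 0;
        for (int x : a2) {                              // emit in array2's order first
            Integer c = freq.remove(x);
            if (c != null) while (c-- > 0) out[i++] = x;
        }
        for (Map.Entry<Integer, Integer> e : freq.entrySet()) {
            int c = e.getValue();                       // leftovers in ascending order
            while (c-- > 0) out[i++] = e.getKey();
        }
        return out;
    }

    public static void main(String[] args) {
        int[] a1 = {2, 1, 2, 5, 7, 1, 9, 3, 6, 8, 8};
        int[] a2 = {2, 1, 8, 3};
        System.out.println(Arrays.toString(relativeSort(a1, a2)));
        // [2, 2, 1, 1, 8, 8, 3, 5, 6, 7, 9]
    }
}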
Zero Sum Sub-arrays Count - Hashing Problems
Problem: Find the number of sub-arrays whose sum evaluates to zero.
Following is the implementation:
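A minimal prefix-sum sketch (class name mine):

import java.util.*;

public class ZeroSumCount {
    // Equal prefix sums bound a zero-sum sub-array, so for each prefix sum
    // already seen c times we add c to the count before incrementing it.
    static int countZeroSumSubarrays(int[] a) {
        Map<Long, Integer> seen = new HashMap<>();
        seen.put(0L, 1);                 // empty prefix
        long sum = 0;
        int count = 0;
        for (int x : a) {
            sum += x;
            int c = seen.getOrDefault(sum, 0);
            count += c;
            seen.put(sum, c + 1);
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countZeroSumSubarrays(new int[]{6, -1, -3, 4, -2, 2, 4, 6, -12, -7})); // 4
    }
}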
Largest Sub-array With Sum Zero - Hashing Problem
Problem: Find the length of the largest sub-array with sum zero.
Following is the implementation, where we store the prefix sum at every index in a hashmap and check whether the same sum is already present (only possible if the sub-array between the two indices sums to zero):
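A minimal sketch of that prefix-sum idea (class name mine):

import java.util.*;

public class LargestZeroSumSubarray {
    // Store the first index of each prefix sum; seeing the same sum again means
    // the elements in between sum to zero.
    static int maxZeroSumLength(int[] a) {
        Map<Long, Integer> firstIndex = new HashMap<>();
        firstIndex.put(0L, -1);          // prefix sum 0 before the array starts
        long sum = 0;
        int best = 0;
        for (int i = 0; i < a.length; i++) {
            sum += a[i];
            Integer j = firstIndex.get(sum);
            if (j != null) best = Math.max(best, i - j);
            else firstIndex.put(sum, i);
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(maxZeroSumLength(new int[]{15, -2, 2, -8, 1, 7, 10, 23})); // 5
    }
}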
Four Sum - Hashing Problems
Problem: Find four elements in an array whose sum equals x.
Following is the implementation using hashmap:
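A minimal sketch of one common hashmap approach - store pair sums and look for a complementary, index-disjoint pair. Storing only one pair per sum keeps the sketch simple but is not fully general:

import java.util.*;

public class FourSum {
    // O(N^2) expected time and space.
    static int[] findFour(int[] a, int x) {
        Map<Integer, int[]> pairSum = new HashMap<>();   // sum -> one index pair (i, j)
        for (int i = 0; i < a.length; i++) {
            for (int j = i + 1; j < a.length; j++) {
                int[] p = pairSum.get(x - (a[i] + a[j]));
                if (p != null && p[0] != i && p[0] != j && p[1] != i && p[1] != j)
                    return new int[]{a[p[0]], a[p[1]], a[i], a[j]};
                pairSum.putIfAbsent(a[i] + a[j], new int[]{i, j});
            }
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(findFour(new int[]{10, 2, 3, 4, 5, 9, 7, 8}, 23)));
        // [10, 8, 2, 3]
    }
}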
Two Sum - Hashing Problem
Problem: Find two elements in an array whose sum is x.
Following is the implementation:
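A minimal single-pass sketch using a HashSet (names mine):

import java.util.*;

public class TwoSum {
    // For each element, check whether its complement was already seen.
    static boolean hasPairWithSum(int[] a, int x) {
        Set<Integer> seen = new HashSet<>();
        for (int v : a) {
            if (seen.contains(x - v)) return true;
            seen.add(v);
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(hasPairWithSum(new int[]{1, 4, 45, 6, 10, 8}, 16)); // true (6 + 10)
    }
}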
Longest Consecutive Subsequence - Hashing Problem
To find the longest consecutive subsequence in an array, following is the implementation using a hashmap:
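A minimal sketch; a HashSet suffices here (class name mine):

import java.util.*;

public class LongestConsecutive {
    // Put all values in a set; only start counting from values that begin a run
    // (v-1 absent), so each element is visited O(1) times on average.
    static int longestConsecutive(int[] a) {
        Set<Integer> set = new HashSet<>();
        for (int v : a) set.add(v);
        int best = 0;
        for (int v : set) {
            if (!set.contains(v - 1)) {          // v starts a run
                int len = 1;
                while (set.contains(v + len)) len++;
                best = Math.max(best, len);
            }
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(longestConsecutive(new int[]{1, 9, 3, 10, 4, 20, 2})); // 4 (1,2,3,4)
    }
}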
Maximum Distance Between Two Occurrences Of Same Element - Hashing Problem
To find the maximum distance between two occurrences of the same element, we can store the first index of each element in a hashmap and track the maximum index distance. Following is the implementation:
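A minimal sketch (class name mine):

import java.util.*;

public class MaxDistance {
    // Remember only the first index of each value; the distance from it to the
    // current index is maximal for that value.
    static int maxDistance(int[] a) {
        Map<Integer, Integer> first = new HashMap<>();
        int best = 0;
        for (int i = 0; i < a.length; i++) {
            Integer j = first.putIfAbsent(a[i], i);
            if (j != null) best = Math.max(best, i - j);
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(maxDistance(new int[]{3, 2, 1, 2, 1, 4, 5, 8, 6, 7, 4, 2})); // 10
    }
}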
Check If Two Arrays Are Equal - Hashing Problems
To check if two arrays are equal, the lengths and the elements along with their frequencies need to be considered. For this, a hashmap can be used. Following is the implementation:
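A minimal sketch comparing the arrays as multisets (class name mine):

import java.util.*;

public class ArraysEqualAsMultisets {
    // Equal lengths plus matching frequency of every element.
    static boolean areEqual(int[] a, int[] b) {
        if (a.length != b.length) return false;
        Map<Integer, Integer> freq = new HashMap<>();
        for (int x : a) freq.merge(x, 1, Integer::sum);
        for (int x : b) {
            Integer c = freq.get(x);
            if (c == null || c == 0) return false;
            freq.put(x, c - 1);
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(areEqual(new int[]{3, 5, 2, 5, 2}, new int[]{2, 3, 5, 5, 2})); // true
    }
}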
Thursday, 30 July 2020
Linear Probing - Coursera Algorithms
Linear probing is used to resolve collisions in hashing. It uses less space and has better cache performance than separate chaining.
An array of size M > number of elements N is used such that:
1. Hash - map key to i between 0 and M-1.
2. Insert - at table index i; if not free, try i+1, i+2, etc.
3. Search - search table index i; if occupied and no match, try i+1, i+2, etc.
Following is the implementation:
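A sketch along the lines of Sedgewick's LinearProbingHashST, trimmed to put/get (the unchecked generic-array casts are as in the course code):

public class LinearProbingHashST<Key, Value> {
    private int m = 16;                 // table size, kept at least twice the key count
    private int n = 0;                  // number of key-value pairs
    private Key[] keys = (Key[]) new Object[m];
    private Value[] vals = (Value[]) new Object[m];

    private int hash(Key key) { return (key.hashCode() & 0x7fffffff) % m; }

    public void put(Key key, Value val) {
        if (n >= m / 2) resize(2 * m);  // keep the table at most half full
        int i;
        for (i = hash(key); keys[i] != null; i = (i + 1) % m)
            if (keys[i].equals(key)) { vals[i] = val; return; }
        keys[i] = key;
        vals[i] = val;
        n++;
    }

    public Value get(Key key) {
        for (int i = hash(key); keys[i] != null; i = (i + 1) % m)
            if (keys[i].equals(key)) return vals[i];
        return null;
    }

    private void resize(int capacity) {  // rehash everything into a bigger table
        LinearProbingHashST<Key, Value> t = new LinearProbingHashST<>();
        t.m = capacity;
        t.keys = (Key[]) new Object[capacity];
        t.vals = (Value[]) new Object[capacity];
        for (int i = 0; i < m; i++)
            if (keys[i] != null) t.put(keys[i], vals[i]);
        m = t.m; keys = t.keys; vals = t.vals;
    }

    public static void main(String[] args) {
        LinearProbingHashST<String, Integer> st = new LinearProbingHashST<>();
        st.put("a", 1); st.put("b", 2); st.put("a", 3);
        System.out.println(st.get("a") + " " + st.get("b")); // 3 2
    }
}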
Separate Chaining - Coursera Algorithms
To deal with collisions in hashing, the separate chaining implementation can be used, where M linked lists (M < number of elements N) are used such that:
1. Hash - map key to i between 0 and M-1.
2. Insert - at the front of ith chain.
3. Search - in ith chain.
It is easier to implement than linear probing, and clustering is less sensitive to a poorly designed hash function.
Following is the implementation:
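A sketch along the lines of Sedgewick's SeparateChainingHashST, trimmed to put/get with a fixed number of chains:

public class SeparateChainingHashST<Key, Value> {
    private static class Node {
        Object key, val;
        Node next;
        Node(Object key, Object val, Node next) { this.key = key; this.val = val; this.next = next; }
    }

    private final int m;        // number of chains (kept smaller than N to save space)
    private final Node[] st;    // st[i] is the first node of the ith chain

    public SeparateChainingHashST(int m) { this.m = m; this.st = new Node[m]; }

    private int hash(Key key) { return (key.hashCode() & 0x7fffffff) % m; }

    public void put(Key key, Value val) {
        int i = hash(key);
        for (Node x = st[i]; x != null; x = x.next)
            if (key.equals(x.key)) { x.val = val; return; }
        st[i] = new Node(key, val, st[i]);   // insert at the front of the ith chain
    }

    public Value get(Key key) {
        for (Node x = st[hash(key)]; x != null; x = x.next)
            if (key.equals(x.key)) return (Value) x.val;
        return null;
    }

    public static void main(String[] args) {
        SeparateChainingHashST<String, Integer> st = new SeparateChainingHashST<>(97);
        st.put("x", 1); st.put("y", 2);
        System.out.println(st.get("x") + " " + st.get("y")); // 1 2
    }
}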
Hash Functions - Coursera Algorithms
We can do better than sequential/binary search, or searching through a BST/red-black BST, by storing the data in a key-indexed table. Hash functions compute the array index from the key. We need to handle the hash computation, the equality test, and collision resolution when two keys hash to the same index.
There are several hash implementations, some of them being:
1. Horner-style - 31*initial_hash_value + current element value (or the current object's hashCode), as in String.hashCode.
2. Modular - (key.hashCode() & 0x7fffffff) % M, returning a hash between 0 and M-1; the mask clears the sign bit, so Math.abs is not needed (Math.abs alone fails for Integer.MIN_VALUE).
The following code experiments with what happens when we override equals/hashCode:
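A small stand-in experiment (the Point class is mine): with hashCode overridden consistently with equals, the lookup succeeds; remove the override and it fails:

import java.util.*;

public class HashCodeExperiment {
    static class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }

        @Override public boolean equals(Object o) {
            if (!(o instanceof Point)) return false;
            Point p = (Point) o;
            return x == p.x && y == p.y;
        }

        // Comment this out and the HashMap lookup below returns null: equal points
        // land in different buckets because the default identity hashCode is used.
        @Override public int hashCode() { return 31 * x + y; }
    }

    public static void main(String[] args) {
        Map<Point, String> map = new HashMap<>();
        map.put(new Point(1, 2), "a");
        System.out.println(map.get(new Point(1, 2))); // "a" with hashCode; null without it
    }
}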
Monday, 6 July 2020
Heap Sort - Coursera Algorithms
Heap Sort is an in-place sort that builds a max binary heap and then repeatedly deletes the maximum element. It is not stable, its inner loop takes longer than Quick Sort's, and it makes poor use of the cache. Building the heap takes O(N) and the sort takes O(N log N).
Following is the program:
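A sketch of the standard heapsort, adapted to int arrays with 1-based heap indices mapped onto a 0-based array:

public class HeapSort {
    // Build a max-heap in place (O(N)), then repeatedly swap the max to the end
    // and sink the new root (O(N log N)).
    public static void sort(int[] a) {
        int n = a.length;
        for (int k = n / 2; k >= 1; k--) sink(a, k, n);   // heap construction
        while (n > 1) {
            swap(a, 1, n--);                              // move max into place
            sink(a, 1, n);
        }
    }

    private static void sink(int[] a, int k, int n) {
        while (2 * k <= n) {
            int j = 2 * k;
            if (j < n && less(a, j, j + 1)) j++;          // pick the larger child
            if (!less(a, k, j)) break;
            swap(a, k, j);
            k = j;
        }
    }

    private static boolean less(int[] a, int i, int j) { return a[i - 1] < a[j - 1]; }
    private static void swap(int[] a, int i, int j) { int t = a[i - 1]; a[i - 1] = a[j - 1]; a[j - 1] = t; }

    public static void main(String[] args) {
        int[] a = {5, 1, 4, 2, 3};
        sort(a);
        System.out.println(java.util.Arrays.toString(a)); // [1, 2, 3, 4, 5]
    }
}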
Binary Heap - Coursera Algorithms
A binary tree is either empty or points to a root node with links to left and right binary trees. A tree is complete if it is perfectly balanced, except possibly for the bottom level. A binary heap is an array representation of a heap-ordered complete binary tree.
The max node is the root at a[1]. For a node k, its parent is at k/2 and its children are at 2k and 2k+1.
Swim: when a child node is greater than its parent, keep exchanging them until they are ordered.
Sink: when a parent node is less than a child, keep exchanging it with the larger child until they are ordered.
In the insert operation, add the child at the end and swim it. --> O(log N)
In the delete operation, exchange the root with the node at the end and sink the new top node. --> O(log N)
Following is the program:
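A minimal int-based sketch (Sedgewick's version is generic; names here are mine):

public class MaxHeap {
    private final int[] pq;     // heap-ordered complete tree in pq[1..n]; pq[0] unused
    private int n = 0;

    public MaxHeap(int capacity) { pq = new int[capacity + 1]; }

    public void insert(int x) {         // O(log N): add at the end, then swim
        pq[++n] = x;
        swim(n);
    }

    public int delMax() {               // O(log N): swap root with last, then sink
        int max = pq[1];
        swap(1, n--);
        sink(1);
        return max;
    }

    private void swim(int k) {          // child greater than parent: exchange upward
        while (k > 1 && pq[k / 2] < pq[k]) { swap(k / 2, k); k = k / 2; }
    }

    private void sink(int k) {          // parent less than a child: exchange downward
        while (2 * k <= n) {
            int j = 2 * k;
            if (j < n && pq[j] < pq[j + 1]) j++;
            if (pq[k] >= pq[j]) break;
            swap(k, j);
            k = j;
        }
    }

    private void swap(int i, int j) { int t = pq[i]; pq[i] = pq[j]; pq[j] = t; }

    public static void main(String[] args) {
        MaxHeap h = new MaxHeap(10);
        h.insert(3); h.insert(7); h.insert(5);
        System.out.println(h.delMax()); // 7
    }
}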
Maximum Priority Queue - Coursera Algorithms
Max priority queues pick the maximum element for deletion, instead of picking the first-in element as done in queues. With the elementary unordered-array implementation, it takes O(N) to extract the max. Following is the program:
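A sketch of the elementary unordered-array variant, which is what gives the O(N) extract-max (the post's program may differ):

public class UnorderedMaxPQ {
    private final int[] pq;
    private int n = 0;

    public UnorderedMaxPQ(int capacity) { pq = new int[capacity]; }

    public void insert(int x) { pq[n++] = x; }      // O(1)

    public int delMax() {                           // O(N): scan for the max
        int max = 0;
        for (int i = 1; i < n; i++)
            if (pq[i] > pq[max]) max = i;
        int t = pq[max];
        pq[max] = pq[--n];                          // fill the hole with the last item
        return t;
    }

    public static void main(String[] args) {
        UnorderedMaxPQ pq = new UnorderedMaxPQ(10);
        pq.insert(4); pq.insert(9); pq.insert(1);
        System.out.println(pq.delMax()); // 9
    }
}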
Three Way Quick Sort - Coursera Algorithms
Three Way Quick Sort partitions the array into three parts around a partition value, such that there are no larger entries to its left and no smaller entries to its right. It is in place, but not a stable sort. This variant is used to deal with duplicate elements.
Time complexity: O(N lg N)
Following is the program; the entries equal to the partition value lie between lt and gt:
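A sketch of Dijkstra's 3-way partitioning on int arrays:

public class ThreeWayQuickSort {
    // After partitioning: a[lo..lt-1] < v, a[lt..gt] == v, a[gt+1..hi] > v;
    // recurse only on the two outer parts.
    public static void sort(int[] a, int lo, int hi) {
        if (hi <= lo) return;
        int lt = lo, gt = hi, i = lo + 1;
        int v = a[lo];
        while (i <= gt) {
            if (a[i] < v) swap(a, lt++, i++);
            else if (a[i] > v) swap(a, i, gt--);
            else i++;
        }
        sort(a, lo, lt - 1);
        sort(a, gt + 1, hi);
    }

    private static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

    public static void main(String[] args) {
        int[] a = {4, 1, 4, 2, 4, 3};
        sort(a, 0, a.length - 1);
        System.out.println(java.util.Arrays.toString(a)); // [1, 2, 3, 4, 4, 4]
    }
}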
Wednesday, 24 June 2020
Quick Select - Coursera Algorithms
Quick select finds the kth smallest element by partitioning. It takes linear time on average and O(N²) in the worst case.
Following is the program:
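A minimal sketch on int arrays; the course version shuffles first to make the linear expected time a guarantee:

public class QuickSelect {
    // Partition as in quicksort, but recurse into only one side.
    public static int select(int[] a, int k) {     // k is 0-based: k-th smallest
        int lo = 0, hi = a.length - 1;
        while (hi > lo) {
            int j = partition(a, lo, hi);
            if (j < k) lo = j + 1;
            else if (j > k) hi = j - 1;
            else return a[k];
        }
        return a[k];
    }

    private static int partition(int[] a, int lo, int hi) {
        int v = a[lo], i = lo, j = hi + 1;
        while (true) {
            while (a[++i] < v) if (i == hi) break;
            while (v < a[--j]) if (j == lo) break;
            if (i >= j) break;
            swap(a, i, j);
        }
        swap(a, lo, j);                  // put the partition element into position j
        return j;
    }

    private static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

    public static void main(String[] args) {
        System.out.println(select(new int[]{7, 4, 9, 1, 3}, 2)); // 4 (third smallest)
    }
}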
Quick Sort - Coursera Algorithms
Quick Sort uses a partition element, such that there are no larger entries to its left and no smaller entries to its right. It is in place, but not a stable sort.
Average time complexity: O(N lg N)
Worst case: O(N²)
Following is the program:
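A minimal sketch on int arrays (a random shuffle beforehand is what protects against the O(N²) worst case on sorted input):

public class QuickSort {
    public static void sort(int[] a, int lo, int hi) {
        if (hi <= lo) return;
        int j = partition(a, lo, hi);
        sort(a, lo, j - 1);
        sort(a, j + 1, hi);
    }

    // After partitioning, a[j] has no larger entry to its left and no smaller to its right.
    private static int partition(int[] a, int lo, int hi) {
        int v = a[lo], i = lo, j = hi + 1;
        while (true) {
            while (a[++i] < v) if (i == hi) break;   // find item on the left to swap
            while (v < a[--j]) if (j == lo) break;   // find item on the right to swap
            if (i >= j) break;
            swap(a, i, j);
        }
        swap(a, lo, j);
        return j;
    }

    private static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

    public static void main(String[] args) {
        int[] a = {3, 6, 1, 5, 2, 4};
        sort(a, 0, a.length - 1);
        System.out.println(java.util.Arrays.toString(a)); // [1, 2, 3, 4, 5, 6]
    }
}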
Monday, 22 June 2020
Merge Sort - Coursera Algorithms
Merge sort uses the divide-and-conquer approach to sort the elements. It uses at most N lg N compares and 6N lg N array accesses, for a total running time of order N lg N.
Following is an optimized merge sort - it uses insertion sort for small subarrays. Please see Insertion Sort for the insertion sort algorithm. It also has a method for bottom-up merge sort, which is not as efficient as the recursive merge sort.
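A sketch of the top-down version with an insertion-sort cutoff (the cutoff value 7 is a commonly quoted choice, not necessarily the post's):

public class MergeSort {
    private static final int CUTOFF = 7;    // small subarrays go to insertion sort

    public static void sort(int[] a) {
        int[] aux = new int[a.length];
        sort(a, aux, 0, a.length - 1);
    }

    private static void sort(int[] a, int[] aux, int lo, int hi) {
        if (hi <= lo + CUTOFF) { insertionSort(a, lo, hi); return; }
        int mid = lo + (hi - lo) / 2;
        sort(a, aux, lo, mid);
        sort(a, aux, mid + 1, hi);
        merge(a, aux, lo, mid, hi);
    }

    private static void merge(int[] a, int[] aux, int lo, int mid, int hi) {
        System.arraycopy(a, lo, aux, lo, hi - lo + 1);
        int i = lo, j = mid + 1;
        for (int k = lo; k <= hi; k++) {
            if (i > mid) a[k] = aux[j++];
            else if (j > hi) a[k] = aux[i++];
            else if (aux[j] < aux[i]) a[k] = aux[j++];
            else a[k] = aux[i++];
        }
    }

    private static void insertionSort(int[] a, int lo, int hi) {
        for (int i = lo + 1; i <= hi; i++)
            for (int j = i; j > lo && a[j] < a[j - 1]; j--) {
                int t = a[j]; a[j] = a[j - 1]; a[j - 1] = t;
            }
    }

    public static void main(String[] args) {
        int[] a = {9, 4, 7, 1, 3, 8, 2, 6, 5, 0};
        sort(a);
        System.out.println(java.util.Arrays.toString(a)); // [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
    }
}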
Friday, 19 June 2020
Shell Sort - Coursera Algorithms
Shell sort moves entries by more than one position at a time. It is basically insertion sort with a stride length h: it considers h interleaved sequences and sorts them using insertion sort. When the increment h is big, the sub-arrays are small; when the increment gets smaller in subsequent iterations, the array is already partially sorted. Insertion sort works well in both cases, hence it is the best option here.
A good increment sequence, due to Knuth, is 3x+1, but no accurate performance model has been found for this algorithm yet.
It takes O(N^(3/2)) compares in the worst case with the 3x+1 increments. It is fast unless the array size is huge, and its code footprint is small. This makes it a good choice for hardware sort prototypes, embedded systems, the Linux kernel, etc.
Following is the program:
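A minimal sketch using the 3x+1 increments:

public class ShellSort {
    // h-sort the array with Knuth's 3x+1 increments: 1, 4, 13, 40, ...
    public static void sort(int[] a) {
        int n = a.length;
        int h = 1;
        while (h < n / 3) h = 3 * h + 1;
        while (h >= 1) {
            for (int i = h; i < n; i++)            // insertion sort with stride h
                for (int j = i; j >= h && a[j] < a[j - h]; j -= h) {
                    int t = a[j]; a[j] = a[j - h]; a[j - h] = t;
                }
            h = h / 3;
        }
    }

    public static void main(String[] args) {
        int[] a = {5, 8, 1, 9, 3, 7, 2};
        sort(a);
        System.out.println(java.util.Arrays.toString(a)); // [1, 2, 3, 5, 7, 8, 9]
    }
}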
Insertion Sort - Coursera Algorithms
Insertion Sort repeatedly swaps an element with larger entries to its left.
On average it takes ~N²/4 compares and ~N²/4 exchanges.
Best case (already sorted): N-1 compares and zero exchanges.
Worst case (reverse sorted): ~N²/2 compares and ~N²/2 exchanges.
Partially sorted (number of inversions <= cN): linear time, as the number of exchanges equals the number of inversions.
Following is the program; it also uses a shuffle to make the input order as random as possible. This is done by swapping each element at iteration i with a random index r between 0 and i, which takes linear time.
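A minimal sketch of both routines (class name mine):

import java.util.Random;

public class InsertionSort {
    public static void sort(int[] a) {
        for (int i = 1; i < a.length; i++)
            for (int j = i; j > 0 && a[j] < a[j - 1]; j--)
                swap(a, j, j - 1);     // swap with the larger entry to the left
    }

    // Knuth shuffle: linear time, each permutation equally likely.
    public static void shuffle(int[] a) {
        Random rnd = new Random();
        for (int i = 0; i < a.length; i++)
            swap(a, i, rnd.nextInt(i + 1));   // r uniform in [0, i]
    }

    private static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4, 5};
        shuffle(a);                    // randomize
        sort(a);                       // and sort back
        System.out.println(java.util.Arrays.toString(a)); // [1, 2, 3, 4, 5]
    }
}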
Selection Sort - Coursera Algorithms
Selection sort repeatedly finds the minimum remaining element and swaps it with the first unsorted element, until the complete array is sorted. It takes ~N²/2 compares and N exchanges.
Following is the program:
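A minimal sketch on int arrays:

public class SelectionSort {
    public static void sort(int[] a) {
        for (int i = 0; i < a.length; i++) {
            int min = i;
            for (int j = i + 1; j < a.length; j++)   // find the minimum of a[i..]
                if (a[j] < a[min]) min = j;
            int t = a[i]; a[i] = a[min]; a[min] = t; // one exchange per position
        }
    }

    public static void main(String[] args) {
        int[] a = {4, 2, 5, 1, 3};
        sort(a);
        System.out.println(java.util.Arrays.toString(a)); // [1, 2, 3, 4, 5]
    }
}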
Thursday, 18 June 2020
Decimal To Binary Conversion Using Stack
A stack can be used to convert a number into its binary form.
- Push the remainder onto the stack after dividing the number by 2, then halve the number.
- Repeat the previous step while the number > 0.
- Pop the binary remainders (1/0) to get the binary form.
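A minimal sketch (class name mine):

import java.util.ArrayDeque;
import java.util.Deque;

public class DecimalToBinary {
    static String toBinary(int n) {
        if (n == 0) return "0";
        Deque<Integer> stack = new ArrayDeque<>();
        while (n > 0) {
            stack.push(n % 2);   // push the remainder
            n /= 2;
        }
        StringBuilder sb = new StringBuilder();
        while (!stack.isEmpty()) sb.append(stack.pop());  // popping reverses the digits
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toBinary(50)); // 110010
    }
}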
Arithmetic Expression Evaluation Using Stack
Arithmetic expressions can be evaluated using two stacks - one for operators and the other for numeric values.
Algorithm:
- If the element is numeric, push it onto the value stack.
- If the element is an operator, push it onto the operator stack.
- If the element is "(", ignore it.
- If the element is ")", pop an operator from the operator stack and two numbers from the value stack, perform the operation, and push the result back onto the value stack.
Wednesday, 17 June 2020
Generic Queue Using Resized Array And Iterator - Coursera Algorithms
To make the queue generic and to hide its internal representation from the client, we can use Generics and an Iterator.
Following is the program:
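A sketch along the lines of Sedgewick's ResizingArrayQueue, with a circular buffer and an anonymous iterator:

import java.util.Iterator;

public class ResizingArrayQueue<Item> implements Iterable<Item> {
    private Item[] q = (Item[]) new Object[2];
    private int n = 0, first = 0, last = 0;    // last = index of the next enqueue

    public boolean isEmpty() { return n == 0; }

    public void enqueue(Item item) {
        if (n == q.length) resize(2 * q.length);
        q[last++] = item;
        if (last == q.length) last = 0;        // wrap around
        n++;
    }

    public Item dequeue() {
        Item item = q[first];
        q[first++] = null;                     // avoid loitering
        if (first == q.length) first = 0;
        n--;
        if (n > 0 && n == q.length / 4) resize(q.length / 2);
        return item;
    }

    private void resize(int capacity) {
        Item[] copy = (Item[]) new Object[capacity];
        for (int i = 0; i < n; i++) copy[i] = q[(first + i) % q.length];
        q = copy; first = 0; last = n;
    }

    public Iterator<Item> iterator() {
        return new Iterator<Item>() {
            private int i = 0;
            public boolean hasNext() { return i < n; }
            public Item next() { return q[(first + i++) % q.length]; }
        };
    }

    public static void main(String[] args) {
        ResizingArrayQueue<String> queue = new ResizingArrayQueue<>();
        queue.enqueue("a"); queue.enqueue("b"); queue.enqueue("c");
        queue.dequeue();
        for (String s : queue) System.out.println(s); // b, c
    }
}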
Generic Queue Using Linked List And Iterator - Coursera Algorithms
To make the queue generic and to hide its internal representation from the client, we can use Generics and an Iterator.
Following is the program:
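A minimal linked-list sketch with first/last pointers and an anonymous iterator:

import java.util.Iterator;

public class LinkedQueue<Item> implements Iterable<Item> {
    private Node first, last;          // front and back of the list
    private class Node { Item item; Node next; }

    public boolean isEmpty() { return first == null; }

    public void enqueue(Item item) {   // insert at the end of the list
        Node old = last;
        last = new Node();
        last.item = item;
        if (isEmpty()) first = last;
        else old.next = last;
    }

    public Item dequeue() {            // remove the first element
        Item item = first.item;
        first = first.next;
        if (isEmpty()) last = null;
        return item;
    }

    public Iterator<Item> iterator() {
        return new Iterator<Item>() {
            private Node current = first;
            public boolean hasNext() { return current != null; }
            public Item next() { Item item = current.item; current = current.next; return item; }
        };
    }

    public static void main(String[] args) {
        LinkedQueue<String> q = new LinkedQueue<>();
        q.enqueue("a"); q.enqueue("b"); q.enqueue("c");
        q.dequeue();
        for (String s : q) System.out.println(s); // b, c
    }
}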
Generic Stack Using Resized Array And Iterator - Coursera Algorithms
To make the stack generic and to hide its internal representation from the client, we can use Generics and an Iterator.
Following is the program:
Generic Stack Using Linked List And Iterator - Coursera Algorithms
To make the stack generic and to hide its internal representation from the client, we can use Generics and an Iterator.
Following is the program:
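A minimal linked-list sketch; the iterator walks the nodes without exposing them:

import java.util.Iterator;

public class LinkedStack<Item> implements Iterable<Item> {
    private Node first;
    private class Node { Item item; Node next; }

    public boolean isEmpty() { return first == null; }

    public void push(Item item) {          // insert at the front
        Node old = first;
        first = new Node();
        first.item = item;
        first.next = old;
    }

    public Item pop() {                    // remove from the front
        Item item = first.item;
        first = first.next;
        return item;
    }

    public Iterator<Item> iterator() {     // hides the linked-list representation
        return new Iterator<Item>() {
            private Node current = first;
            public boolean hasNext() { return current != null; }
            public Item next() { Item item = current.item; current = current.next; return item; }
        };
    }

    public static void main(String[] args) {
        LinkedStack<Integer> stack = new LinkedStack<>();
        stack.push(1); stack.push(2); stack.push(3);
        for (int x : stack) System.out.println(x); // 3, 2, 1
    }
}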
Queue Using Array Resizing - Coursera Algorithms
Queues follow the FIFO principle. To implement one over an array, we maintain two indices, to the first and last elements, for the dequeue and enqueue operations.
Following is the program:
Queue Using Linked List - Coursera Algorithms
Queues follow the FIFO principle. To implement one over a linked list, we maintain two pointers, to the first and last nodes: enqueue inserts at the end of the list and dequeue removes the first element.
Following is the program:
Tuesday, 16 June 2020
Stack Using Array Resizing - Coursera Algorithms
When implementing stacks using arrays, we may run into overflow and underflow problems. We can resize the array on demand: double it in the push operation when it is full, and halve it in the pop operation when it is only 25% utilized.
This results in constant amortized time per operation, but ~N in the worst case.
A linked list, on the other hand, takes constant time even in the worst case, but it may be slower overall and use more memory because of the links.
If occasional slow operations are bearable, arrays are a better option than linked lists.
Following is the complete program:
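A minimal sketch of the doubling/halving policy (String items, as in the course examples):

public class ResizingArrayStack {
    private String[] s = new String[1];
    private int n = 0;

    public void push(String item) {
        if (n == s.length) resize(2 * s.length);   // double when full
        s[n++] = item;
    }

    public String pop() {
        String item = s[--n];
        s[n] = null;                               // avoid loitering
        if (n > 0 && n == s.length / 4) resize(s.length / 2);  // halve at 25% usage
        return item;
    }

    private void resize(int capacity) {            // ~N copies, amortized constant
        String[] copy = new String[capacity];
        for (int i = 0; i < n; i++) copy[i] = s[i];
        s = copy;
    }

    public static void main(String[] args) {
        ResizingArrayStack stack = new ResizingArrayStack();
        stack.push("x"); stack.push("y");
        System.out.println(stack.pop()); // y
    }
}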
Monday, 15 June 2020
Stack Using Array - Coursera Algorithms
Stack works in a LIFO order. It supports insert, remove, iteration and test for empty operations.
They can be represented using an array whose initial capacity is taken as input. We manipulate an index variable for push and pop, instead of actually deleting items from the stack.
If the stack holds objects, avoid loitering (holding references to objects and preventing garbage collection) by using the following code in the pop operation:
public String pop() { String item = s[--N]; s[N] = null; return item; }
Following is the complete program:
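A minimal fixed-capacity sketch showing the index manipulation and the loitering fix in context:

public class FixedCapacityStack {
    private final String[] s;
    private int n = 0;                 // index of the next free slot

    public FixedCapacityStack(int capacity) { s = new String[capacity]; }

    public boolean isEmpty() { return n == 0; }

    public void push(String item) { s[n++] = item; }

    public String pop() {
        String item = s[--n];
        s[n] = null;                   // avoid loitering so GC can reclaim the object
        return item;
    }

    public static void main(String[] args) {
        FixedCapacityStack stack = new FixedCapacityStack(10);
        stack.push("to"); stack.push("be");
        System.out.println(stack.pop()); // be
    }
}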
Stack Using Linked List - Coursera Algorithms
Stack works in a LIFO order. It supports insert, remove, iteration and test for empty operations.
They can be represented using a linked list, with push and pop performed on the first element. Every operation takes constant time, and a stack of N items takes about ~36N bytes: 16B object overhead + 8B inner-class overhead + 8B reference to the next Node + 4B for the int value.
Following is the program:
Monday, 25 May 2020
Finding maximum and minimum in union find algorithm
This post is a problem based on the union-find algorithm. For the algorithm itself, please see "Weighted Quick Union With Path Compression - Coursera Algorithms".
Problem: Given an element, find the max/min of its connected-component tree.
Following is my, probably brute-force, solution. We use extra arrays to store the max and min values, keeping them up to date only for root nodes. For any other node, we just find its root and return that root's max/min.
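A sketch of that idea (class and method names mine), bolted onto weighted quick union with path compression:

public class UnionFindMinMax {
    private final int[] id, size, max, min;

    public UnionFindMinMax(int n) {
        id = new int[n]; size = new int[n]; max = new int[n]; min = new int[n];
        for (int i = 0; i < n; i++) { id[i] = i; size[i] = 1; max[i] = i; min[i] = i; }
    }

    private int root(int i) {
        while (i != id[i]) { id[i] = id[id[i]]; i = id[i]; }  // path compression
        return i;
    }

    public void union(int p, int q) {
        int rp = root(p), rq = root(q);
        if (rp == rq) return;
        if (size[rp] < size[rq]) { int t = rp; rp = rq; rq = t; }  // rp is the bigger tree
        id[rq] = rp;
        size[rp] += size[rq];
        max[rp] = Math.max(max[rp], max[rq]);   // maintain max/min only at roots
        min[rp] = Math.min(min[rp], min[rq]);
    }

    public int findMax(int p) { return max[root(p)]; }
    public int findMin(int p) { return min[root(p)]; }

    public static void main(String[] args) {
        UnionFindMinMax uf = new UnionFindMinMax(10);
        uf.union(1, 5); uf.union(5, 8); uf.union(8, 3);
        System.out.println(uf.findMax(5) + " " + uf.findMin(8)); // 8 1
    }
}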
Sunday, 24 May 2020
Weighted Quick Union With Path Compression - Coursera Algorithms
Weighted Quick Union can be improved further by path compression. The idea is to flatten the tree: after computing the root of a node, we set the id of each examined node to point to that root.
Running Time:
For M union-find operations on N elements, running time ≤ c ( N + M lg* N ) array accesses.
lg* N is the number of times we have to take the log of N to get 1.
The running time is therefore almost linear; union-find has no truly linear algorithm.
Following is the program:
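A minimal sketch; note it uses the common one-pass variant (each examined node points to its grandparent), a simple-coding compromise on the two-pass version described above:

public class WeightedQuickUnionPathCompression {
    private final int[] id, size;

    public WeightedQuickUnionPathCompression(int n) {
        id = new int[n]; size = new int[n];
        for (int i = 0; i < n; i++) { id[i] = i; size[i] = 1; }
    }

    private int root(int i) {
        while (i != id[i]) {
            id[i] = id[id[i]];     // one-pass compression: point to the grandparent
            i = id[i];
        }
        return i;
    }

    public boolean connected(int p, int q) { return root(p) == root(q); }

    public void union(int p, int q) {
        int rp = root(p), rq = root(q);
        if (rp == rq) return;
        if (size[rp] < size[rq]) { id[rp] = rq; size[rq] += size[rp]; }
        else { id[rq] = rp; size[rp] += size[rq]; }
    }

    public static void main(String[] args) {
        WeightedQuickUnionPathCompression uf = new WeightedQuickUnionPathCompression(10);
        uf.union(4, 3); uf.union(3, 8);
        System.out.println(uf.connected(4, 8)); // true
    }
}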
Weighted Quick Union Algorithm - Coursera Algorithms
Quick Find and Quick Union were slow in performing union and in checking whether two elements are connected, respectively. However, Quick Union can be improved by using weighted trees: the idea is to avoid the tall trees of Quick Union and always keep the trees balanced, such that smaller trees always link up to the larger trees.
We use an array in which each element is initialized to its own index. To weigh the trees, we use an extra array that tracks the size of the tree rooted at each element.
Any node x's depth increases by one when its tree T1 is merged into another tree T2.
As |T2| >= |T1|, the size of the tree containing x at least doubles.
Since a tree of size 1 can double at most lg N times before reaching N, the size of the tree containing x can double at most lg N times.
So the depth of any node x is at most lg N.
Running Time:
initialize - N
union(p,q) - lg N, including the cost of finding the roots
connected(p,q) - lg N, proportional to the depth of p and q
Following is the program:
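A minimal sketch with the extra size[] array:

public class WeightedQuickUnion {
    private final int[] id, size;

    public WeightedQuickUnion(int n) {
        id = new int[n]; size = new int[n];
        for (int i = 0; i < n; i++) { id[i] = i; size[i] = 1; }
    }

    private int root(int i) {
        while (i != id[i]) i = id[i];      // depth is at most lg N
        return i;
    }

    public boolean connected(int p, int q) { return root(p) == root(q); }

    public void union(int p, int q) {      // link the smaller tree to the larger
        int rp = root(p), rq = root(q);
        if (rp == rq) return;
        if (size[rp] < size[rq]) { id[rp] = rq; size[rq] += size[rp]; }
        else { id[rq] = rp; size[rp] += size[rq]; }
    }

    public static void main(String[] args) {
        WeightedQuickUnion uf = new WeightedQuickUnion(10);
        uf.union(1, 2); uf.union(2, 7);
        System.out.println(uf.connected(1, 7)); // true
    }
}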
Saturday, 23 May 2020
Quick Union Algorithm - Coursera Algorithms
Quick Find was very slow, with quadratic operations. Quick Union is a lazy union-find algorithm to unite sets and to check if an element is connected to another. We use an array in which each element is initialized to its own index. Later, we update elements to point to their parents; this way we create trees out of the elements.
Array accesses:
initialize - N
union(p,q) - N, including the cost of finding the roots
connected(p,q) - N
The trees can get quite tall, making the connected operation quite slow.
Following is the program:
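A minimal sketch (no weighting, so trees can get tall):

public class QuickUnion {
    private final int[] id;

    public QuickUnion(int n) {
        id = new int[n];
        for (int i = 0; i < n; i++) id[i] = i;   // each element is its own root
    }

    private int root(int i) {
        while (i != id[i]) i = id[i];            // follow parents; can be N deep
        return i;
    }

    public boolean connected(int p, int q) { return root(p) == root(q); }

    public void union(int p, int q) { id[root(p)] = root(q); }  // link root to root

    public static void main(String[] args) {
        QuickUnion uf = new QuickUnion(10);
        uf.union(3, 4); uf.union(4, 9);
        System.out.println(uf.connected(3, 9)); // true
    }
}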
Quick Find Algorithm - Coursera Algorithms
Quick Find is an eager union-find algorithm.
Purpose: to find if two elements are connected and to perform the union operation on two elements.
We represent the elements as a simple int array, initialized so that each entry holds its own index, and change the ids in the array when a union operation is performed.
Array accesses: 2N+1
initialize - N
union(p,q) - N
connected(p,q) - 1
If N unions are performed on N elements, the unions take N² array accesses in total. Quadratic operations don't scale.
Following is the program:
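A minimal sketch (class name mine):

public class QuickFind {
    private final int[] id;

    public QuickFind(int n) {
        id = new int[n];
        for (int i = 0; i < n; i++) id[i] = i;
    }

    public boolean connected(int p, int q) { return id[p] == id[q]; }  // 1 access each

    public void union(int p, int q) {        // change all entries with id[p] to id[q]: O(N)
        int pid = id[p], qid = id[q];
        for (int i = 0; i < id.length; i++)
            if (id[i] == pid) id[i] = qid;
    }

    public static void main(String[] args) {
        QuickFind uf = new QuickFind(10);
        uf.union(0, 5); uf.union(5, 6);
        System.out.println(uf.connected(0, 6)); // true
    }
}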