So we were asked to devise a way to figure out which page a particular physical address falls in, as well as its offset.
It turns out this is a really simple problem. In this case each page is 4 KB, which really means 4096 bytes, so simply take the address and integer-divide it by 4096; that tells you which page it's in. From there you subtract the total size of the previous pages (4096 * n) from the address, and what's left is the offset. (Equivalently, the offset is just the address modulo 4096.)
The nice thing about virtual memory is that you always know your address will reside within a 4096-byte page. This can make more static stuff easier to handle, rather than trying to guess which physical address you're at and then figuring out some kind of offset.
Here is the program I wrote in action.
Well, I hope you enjoyed it. I know it was a bit short, but the solution to this problem was really easy to see. If you're having trouble understanding how to find a page and offset, please let me know and I'll do my best to help!
Saturday, October 30, 2010
Banker's Algorithm
The Algorithm
The banker's algorithm seems to be a pretty simple algorithm. Essentially, you have some defined amount of resources and then kick off a bunch of threads. If there are resources left, a thread can run and do its thing; if there are no resources, the thread must wait.
Deadlock
The key thing we need to ensure is that there is no possibility of a deadlock. A deadlock occurs when two processes are each waiting on the other to proceed. This can happen if we lock the mutex to check whether we have resources, and then don't let it go when no resource exists. For example, the following situation would produce a deadlock.
1. Thread 1 locks the mutex, gets the last resource, and unlocks the mutex.
2. Thread 2 locks the mutex; no more resources exist, so thread 2 waits for resources (without unlocking the mutex).
3. Thread 1 finishes its work and needs to return the resource. It tries to get the mutex, but it's blocked until thread 2 releases it, and thread 2 won't give up the lock until a resource appears, which can't happen until thread 1 returns one.
This can be avoided by forcing thread 2 to lock the mutex, check for resources, unlock the mutex, and then, if needed, check again by doing it all over. Now when thread 2 holds the mutex and thread 1 tries to lock it, thread 2 releases it shortly after, thread 1 gets it and returns its resource, and the next time thread 2 comes around to lock the mutex there is a resource available and it can run. See? Deadlock resolved!
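The lock/check/unlock loop above can be sketched with POSIX mutexes. This is a minimal sketch of the pattern, not the original program; the names `available`, `acquire_resource`, and `release_resource` are my own, and the starting count of 10 is an assumption:

```c
#include <pthread.h>
#include <unistd.h>

static int available = 10;  /* assumed resource count for the sketch */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void acquire_resource(void) {
    for (;;) {
        pthread_mutex_lock(&lock);
        if (available > 0) {          /* a resource is free: take it */
            available--;
            pthread_mutex_unlock(&lock);
            return;
        }
        pthread_mutex_unlock(&lock);  /* none left: release the lock and retry */
        usleep(1000);                 /* back off so other threads can return one */
    }
}

void release_resource(void) {
    pthread_mutex_lock(&lock);
    available++;
    pthread_mutex_unlock(&lock);
}
```

Because the mutex is always released before retrying, a thread that needs to return a resource can always get the lock, so the deadlock from the numbered scenario above can't happen. (A condition variable would avoid the busy retry, but the polling version matches the fix described here.)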
Additional Thoughts
Reading up on the banker's algorithm on Wikipedia, it says each thread should declare how many system resources it will eventually need, and if the number it requests is more than the system has remaining, it must wait until the resources are available. Since my implementation only requests one resource at a time, I didn't see the need to declare how many it would use. But my understanding is that each thread has a maximum number of instances it is allowed, so it has a local max to check against. Essentially there are two limits each thread needs to handle: a system one and a local one. If a request exceeds the local max, that's an error condition and needs to be handled appropriately; if it exceeds what the system has, again it needs to be handled appropriately.
The fact is that I had way too many lines of output to show it all on screen, so I redirected the output. Here is what was printed:
Bankers Algorithm Running

Customers are coming
Customer Created
Doin Something Time Consuming
Customer Created
Doin Something Time Consuming
Customer Created
Doin Something Time Consuming
Customer Created
Doin Something Time Consuming
Customer Created
Doin Something Time Consuming
Customer Created
Doin Something Time Consuming
Customer Created
Doin Something Time Consuming
Customer Created
Doin Something Time Consuming
Customer Created
Doin Something Time Consuming
Customer Created
Doin Something Time Consuming
Customer Created
Customer Created
Customer Created
Customer Created
Customer Created
Customer Created
Customer Created
Customer Created
Customer Created
Customer Created
Waiting until all customers are gone
Doin Something Time Consuming
Doin Something Time Consuming
Doin Something Time Consuming
Doin Something Time Consuming
Doin Something Time Consuming
Doin Something Time Consuming
Doin Something Time Consuming
Doin Something Time Consuming
Doin Something Time Consuming
Doin Something Time Consuming
Time to go home

I hope you enjoyed the information! Let me know if you have any questions!
Thursday, October 28, 2010
Register Renaming
What is the purpose of Register Renaming?
The goal of register renaming is to allow blocks of instructions to be executed out of order.
Take for example the following instructions
1. divd r7, r5, r12
2. addd r6, r7, r8
3. sd r6, 0(r1)
4. subd r8, r14, r12
5. muld r6, r15, r8
Looking at these, you can see that without register renaming you have to run the instructions in the sequence 1,2,3,4,5. But if you look closely, lines 4 and 5 don't use any data produced by 1,2,3. The issue is that if they run before 1,2,3, they will clobber registers that 1,2,3 are still using.
If we simply rename the registers r8 and r6 in instructions 4 and 5, we can remove that dependency.
This type of dependency is called a false dependency, because the only thing preventing the instructions from running out of order is that they reuse the same registers, not that they share the same data.
Now look at this:
1. divd r7, r5, r12
2. addd r6, r7, r8
3. sd r6, 0(r1)
4. subd S, r14, r12
5. muld T, r15, S
Now instructions 1,2,3 can execute independently of 4,5. Since there are still Read After Write (RAW) and Write After Read (WAR) dependencies among the instructions within 1,2,3, those must occur in order relative to each other.
For example, the following orders are now valid:
a. 1,4,2,3,5
b. 4,1,2,3,5
c. 1,2,4,5,3
...
The only requirement now is that 1,2,3 execute in order relative to each other, and 4,5 execute in order relative to each other. For example, the following executions are invalid:
a. 2,1,4,5,3
b. 3,1,4,2,5
The reason is that the instructions are data dependent on each other within those blocks of code: 2 can't run before 1, and 5 can't run before 4.
Any questions? Just go ahead and ask.