The Science of Time Management

How to optimize your time with the help of Computer Science

By Samuel Flender

Everyone knows what it feels like to procrastinate.

Despite its bad reputation, though, procrastination is in fact an optimal algorithm.

When you always do the quickest task available first, you are in fact following the so-called Shortest Processing Time algorithm.

This approach is optimal in a precise sense: it minimizes the average completion time of your tasks, so items get crossed off the list as quickly as possible.

The problem, though, is that procrastination ignores the weight, or priority, of individual tasks.
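To make this concrete, here is a minimal Python sketch (the task names and numbers are made up for illustration) contrasting plain Shortest Processing Time with its weighted variant, which sorts tasks by processing time divided by importance:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    hours: float         # estimated processing time
    weight: float = 1.0  # importance; plain SPT ignores this

def weighted_completion_cost(order):
    """Sum of weight * completion time over the schedule: lower is better."""
    clock, cost = 0.0, 0.0
    for task in order:
        clock += task.hours
        cost += task.weight * clock
    return cost

todo = [
    Task("answer a quick email", 0.1),
    Task("write project report", 3.0, weight=10.0),
    Task("file expense claim", 0.5),
]

spt = sorted(todo, key=lambda t: t.hours)              # procrastination: quickest task first
wspt = sorted(todo, key=lambda t: t.hours / t.weight)  # weighted SPT: best importance-per-hour first

print(weighted_completion_cost(spt))   # higher cost: the important report keeps waiting
print(weighted_completion_cost(wspt))  # lower cost: importance is taken into account
```

Sorting by time divided by weight is the classic fix: quick-but-trivial items no longer automatically jump the queue ahead of slow-but-important ones.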

Photo credit: Aron Visuals, Unsplash

Unfortunately, some of the technology that we use today actively encourages procrastination.

Take app badges for example.

The small numbers hovering on top of the apps on your phone or laptop screen merely show the number of tasks (or messages) remaining, but not their importance.

A promising cure for procrastination is thus to focus on the important tasks first, and deal with the less important tasks later.

This works as long as no high-priority task depends on a low-priority one.

If such dependencies do exist and are not properly accounted for, things can go badly wrong. One infamous example is the Mars Pathfinder: instead of collecting data and samples from the surface of Mars, the spacecraft was procrastinating through a series of system resets, because a low-priority task was holding a system resource needed by the high-priority tasks, a failure mode known as priority inversion.
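Python threads have no real priorities, so the following is only a toy sketch of the underlying dependency problem, not the Pathfinder's actual flight software: a high-priority task ends up waiting on a lock held by a low-priority one.

```python
import threading
import time

shared_bus = threading.Lock()  # stands in for a shared resource such as an information bus

def low_priority_housekeeping():
    with shared_bus:           # the low-priority task grabs the shared resource...
        time.sleep(2.0)        # ...and holds it while doing slow, unimportant work

def high_priority_science():
    start = time.monotonic()
    with shared_bus:           # the high-priority task is stuck here until the lock is released
        waited = time.monotonic() - start
        print(f"high-priority task waited {waited:.1f}s on a low-priority one")

low = threading.Thread(target=low_priority_housekeeping)
high = threading.Thread(target=high_priority_science)
low.start()
time.sleep(0.1)  # make sure the low-priority task acquires the lock first
high.start()
low.join()
high.join()
```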

Context switch: the cost of multitasking

Another complication in time management is randomness: once you start work on an important task, other tasks might come up randomly, and you have to evaluate whether any of those are important and urgent enough to interrupt your current work.

Furthermore, it is important to keep in mind that there is always a cost associated with switching tasks, which is well known in Computer Science as a context switch.

When a processor switches tasks, it needs to copy the current state of the task into memory, retrieve the most recent state of the other task from memory, find the position where it left off in that process, and continue from there.

All this time the processor is not really doing any work — it is doing metawork.
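As a rough mental model (the function and the dictionary "state" here are purely illustrative), the bookkeeping looks something like this:

```python
# Toy model of the bookkeeping in a context switch; none of it is "real" work.
saved_states = {}

def context_switch(current_task, current_state, next_task):
    saved_states[current_task] = dict(current_state)         # copy the current state out to memory
    restored = saved_states.get(next_task, {"position": 0})  # retrieve the other task's last state
    return restored                                          # only after this can real work resume

state = {"position": 42}                                      # where we are in the current task
state = context_switch("write report", state, "answer email") # pure metawork
```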

Photo credit: Alexandre Debiève, Unsplash

As Brian Christian and Tom Griffiths point out in Algorithms to Live By, we humans clearly experience context switching costs, too.

Starting work on a research problem, a section of an article, a chapter of a book, or a piece of code all require “loading” the most recent state of that task into your memory (“where was I?”).

Once you have to interrupt that work for something else, you lose some of that loaded state and thus pay a context-switching cost.

Thrashing: multitasking gone wrong

In the extreme case, a processor that is handling too many different tasks at the same time can end up in a state of thrashing.

This means that the processor is constantly switching tasks, and not getting any real work done.

When your operating system suddenly freezes, thrashing is a likely cause.

For processors, there are two design choices that help avoid the risk of thrashing. The first is to enforce a minimum time slice: a lower bound on how long the processor stays with any given task before switching.

For instance, in the Linux operating system the default minimum time slice is 100 milliseconds (and you can look that up in the Linux source code).
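A back-of-the-envelope sketch shows why such a floor matters. Assuming, purely for illustration, that every slice of work is followed by a fixed one-millisecond switching cost, the fraction of time spent on real work collapses as the slices shrink:

```python
def useful_fraction(slice_ms, switch_cost_ms=1.0):
    """Fraction of time spent on real work if every slice of work
    of length slice_ms is followed by one context switch."""
    return slice_ms / (slice_ms + switch_cost_ms)

for slice_ms in (100, 10, 1, 0.1):
    print(f"{slice_ms:>6} ms slices -> {useful_fraction(slice_ms):.0%} useful work")
```

With 100 ms slices the overhead is negligible; with slices shorter than the switching cost itself, most of the time goes to metawork, which is exactly the thrashing regime described above.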

The second is for the processor to simply refuse to take on new tasks when it is already running at maximum capacity.

The take-away lesson for our own lives is this: don't ignore context-switching costs.

Make the “time slices” for working on the important tasks as long as possible.

In the worst case, if you're constantly switching, you will not be able to get any real work done, just like a thrashing computer.

Photo credit: Juan Pablo Rodriguez, Unsplash

Enter interrupt coalescing

Ultimately, dealing with tasks in real time is a trade-off between responsiveness and throughput: if you're always immediately responsive, your throughput will suffer.

On the other hand, if you’re focusing on a single task while ignoring all interruptions, your responsiveness will suffer.

As Christian and Griffiths put it:

“The moral is that you should try to stay on a single task as long as possible without decreasing your responsiveness below a minimum acceptable limit. Decide how responsive you need to be — and then, if you want to get things done, be no more responsive than that.” (Brian Christian and Tom Griffiths, in Algorithms to Live By)

One powerful tool in the trade-off between responsiveness and throughput is what is known in Computer Science as interrupt coalescing.

The idea is this: while working on a task, you ignore all interruptions and accumulate (coalesce) them.

Then, after some predetermined time, you take care of all the interruptions at once.
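A minimal sketch of the idea in Python (the class name and the three-hour default are made up for illustration): interruptions are queued silently and only dealt with in one batch once the chosen interval has passed.

```python
import time
from collections import deque

class CoalescedInbox:
    """Toy interrupt coalescing: accumulate interruptions and handle
    them together, at most once every `interval_s` seconds."""

    def __init__(self, interval_s=3 * 60 * 60):  # e.g. deal with interruptions every 3 hours
        self.interval_s = interval_s
        self.pending = deque()
        self.last_flush = time.monotonic()

    def interrupt(self, item):
        self.pending.append(item)                # coalesce: note it, but do not react yet

    def maybe_handle(self, handle):
        if time.monotonic() - self.last_flush < self.interval_s:
            return                               # stay focused on the current task
        while self.pending:
            handle(self.pending.popleft())       # take care of everything at once
        self.last_flush = time.monotonic()

inbox = CoalescedInbox(interval_s=10)            # a short interval, just for demonstration
inbox.interrupt("new email")
inbox.interrupt("chat message")
inbox.maybe_handle(print)                        # does nothing until the interval has passed
```

The single parameter interval_s is the responsiveness-versus-throughput knob from the quote above: make it as long as you can while still being acceptably responsive.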

Here are some examples of successful interrupt coalescing:

Instead of paying your monthly bills one by one as you receive them in the mail, pay them all together on a fixed day.

Instead of reading every email immediately as it comes in, check your email only once every few hours (or even once per day), and then take care of everything that came up.

Adjust the time period to how responsive you need to be.

Instead of having your phone alert you immediately of any incoming message or social media event, mute or deactivate the notifications, and only check your phone every few hours or so.

Polarize your time

Google's site reliability engineers devote an entire chapter to how to deal with interrupts at work in their book Site Reliability Engineering: How Google Runs Production Systems.

They encourage polarizing your time, meaning that when you come into work each day, you should know whether you’re doing just project work or just interrupts.

Having some (rotating) team members take care of interrupts enables others to focus on project work.

This is again interrupt coalescing, not at the individual level but at the level of an entire team.

Similar to Christian and Griffiths, the authors warn about the costs of context switching, and advise:

“Assign a cost to context switches. A 20-minute interruption while working on a project entails two context switches; realistically, this interruption results in a loss of a couple of hours of truly productive work.” (Dave O'Connor, in Site Reliability Engineering: How Google Runs Production Systems)

Photo credit: Austin Neill, Unsplash

Conclusion: minimize interruptions, find flow

One of the important lessons from Computer Science is to be keenly aware of the costs of context switching: stay focused on a single task as long as possible.

A powerful tool that you can use is interrupt coalescing: do not respond to all interruptions right away.

Psychologist Mihaly Csikszentmihalyi, in his book Flow: The Psychology of Optimal Experience, argues that once you immerse yourself in a single task, and the challenge of that task matches your skill level, then you get into a state of mind that he calls the Flow experience (also known as “being in the zone”).

When in Flow, you lose your sense of time, your sense of self, and your awareness of your surroundings, and you feel fully involved in the ongoing process of your work.

Flow is something we should strive for every day.

Flow is the absence of context switching.
