Irreversible means you can't figure out the input from the output. As a simple example, let's say I have two numbers as input, add them together, and get the output 4. Can you figure out the input? No. It could have been (2, 2) or (1, 3) or something else. Information about the input has been lost.
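To make that concrete, here's a tiny sketch (Python, purely for illustration):

```python
# An irreversible operation: addition collapses many inputs to one output.
pairs = [(a, b) for a in range(5) for b in range(5) if a + b == 4]
print(pairs)  # [(0, 4), (1, 3), (2, 2), (3, 1), (4, 0)]
# Given only the output 4, all five input pairs are equally possible:
# the information distinguishing them is gone.
```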
When it comes to energy cost, it seems no one is mentioning what's at work here. There's something called [Landauer's principle](https://en.wikipedia.org/wiki/Landauer%27s_principle), which states that any logically irreversible operation, like erasing a bit, has a minimum energy cost. Frankly, it's a very fascinating result that connects the fundamental laws of physics with information theory. Now, we're very far from that minimum in practice, but it means that no amount of clever engineering could ever make a computer more efficient than that if we continue with irreversible computing.
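For a sense of scale, the Landauer bound on erasing a single bit is k_B · T · ln 2, which at room temperature is a vanishingly small amount of energy:

```python
# Landauer's limit: erasing one bit dissipates at least k_B * T * ln(2).
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # roughly room temperature, K

E_min = k_B * T * math.log(2)
print(f"{E_min:.2e} J per bit erased")  # ~2.87e-21 J
```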
But reversible computing circumvents that fundamental limit. There's no known physical limit to how efficiently we can perform reversible computing, so, as far as we know, it has the potential to be arbitrarily efficient. Fredkin gates are one possibility. Quantum computing is, interestingly, also reversible (that's not the source of quantum computing's power, but it happens to be one of its properties).
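As a sketch of what "reversible" means at the gate level, here's a toy Fredkin (controlled-swap) gate; applying it twice gets the original input back, so no information is ever destroyed:

```python
def fredkin(c, a, b):
    # controlled swap: if the control bit c is 1, swap a and b
    return (c, b, a) if c == 1 else (c, a, b)

for bits in [(0, 0, 1), (1, 0, 1), (1, 1, 0)]:
    assert fredkin(*fredkin(*bits)) == bits  # the gate is its own inverse
print("every input is recoverable from the output")
```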
I watched the video from the beginning so I'll try to explain what he's talking about:
First, he's vastly oversimplifying and exaggerating reality. I have a motto: the difference between a scientist and an engineer is that a scientist dreams up things in an ideal situation and thinks he has solved the problem, whereas an engineer lives in the real world and has to account for it in his work. This guy is a scientist.
So, let's start with how data is stored on a computer. He referred to computation and data storage as if they were the same thing, so I'm going to address them one at a time, because they don't work the same way at all.
Even within the world of storage there are several technologies that work differently.
The basic concept you need to know first is that "gates" can perform logic and can be used to store information. Gates are made of transistors, which can be printed microscopically small on a silicon chip. Within the computer, if a signal is at a high voltage it's considered a 1, and if it isn't, it's considered a 0.
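As a rough illustration of gates storing a bit, here's a cross-coupled NOR pair (an SR latch) modeled in Python. This is a simplified sketch, not how any particular memory chip works, but it shows the idea: the feedback loop between two gates is the storage.

```python
def nor(a, b):
    return 0 if (a or b) else 1

def sr_latch(s, r, q, qn):
    # iterate the cross-coupled feedback loop until the outputs settle
    for _ in range(4):
        q, qn = nor(r, qn), nor(s, q)
    return q, qn

q, qn = sr_latch(s=1, r=0, q=0, qn=1)   # pulse "set"
print(q)                                 # 1
q, qn = sr_latch(s=0, r=0, q=q, qn=qn)  # inputs released...
print(q)                                 # ...still 1: the bit is held
```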
RAM (DRAM, specifically) works by storing data as charge in tiny capacitor cells on an active circuit, and this charge dissipates over time. The memory controller automatically goes through and recharges all the charged cells while leaving the discharged ones the way they are; every cell gets refreshed this way on the order of every few tens of milliseconds. This process does take energy, but not much, and, more importantly, the idea with RAM is that it's accessed so frequently that the data changes all the time anyway. So rather than doing something unnecessary and wasting electricity, you're doing work you would mostly have to do regardless. You also want this to go as fast as possible, so making the process reversible is of little value.
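Here's a toy model of that refresh loop (the leak rate, timing, and threshold are all made up, purely to illustrate the mechanism):

```python
# Each "cell" is a charge level that leaks toward 0; a periodic refresh
# pass rewrites every cell still above the read threshold to full charge.
cells = [1.0, 0.0, 1.0, 1.0]   # stored charge (1.0 = full, 0.0 = empty)
LEAK, THRESHOLD = 0.01, 0.5    # assumed values, not real DRAM numbers

for tick in range(1000):
    cells = [max(0.0, c - LEAK) for c in cells]           # charge leaks away
    if tick % 32 == 0:                                     # periodic refresh
        cells = [1.0 if c > THRESHOLD else 0.0 for c in cells]

print([1 if c > THRESHOLD else 0 for c in cells])  # data survives: [1, 0, 1, 1]
```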
For the processor, you have tons of gates connected together into different structures that perform different functions like addition, subtraction, division, and so on. These are in turn connected into even more advanced structures within the processor. The key concept within the processor is that the gates are there to process information and pass it on to the next thing down the line; little energy ever needs to flow between individual gates. There is no place I can see in this model where you would get decay of stored information in the way he described; the information is changing constantly, at multiple gigahertz.
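For example, here's how plain gates chain into one of those structures, a 4-bit ripple-carry adder (a Python sketch, with bitwise operators standing in for the XOR, AND, and OR gates):

```python
def full_adder(a, b, cin):
    # one bit position, built from nothing but logic gates
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def add4(x, y):
    # chain four full adders, each carry feeding the next stage
    carry, total = 0, 0
    for i in range(4):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        total |= s << i
    return total

print(add4(0b0101, 0b0011))  # 8, i.e. 5 + 3
```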
The reason computers actually use so much power, and the reason they've been getting smaller and more power-efficient while also becoming more powerful, comes down to something of a technicality in the physics. It's a prime example of my motto and of how the real world can mess with your plans:
This is called the "transmission line" problem. Within the processor, the billions of transistors that form the gates, which form the complex processing structures, are all connected to each other with tiny bits of metal called interconnects, basically wires. In a circuit like your home light switch, you wouldn't think of the wire as taking up most of the energy. But in a processor that's exactly what happens: when you flip that switch on and off billions of times a second, on that small timescale the wire acts like a capacitor. Capacitors use energy every time they charge or discharge, turning it into heat. Multiply that by the billions of bits of wire within a processor and suddenly you are wasting a TON of energy and generating a ton of heat, which is why modern processors need heat sinks.
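You can get a feel for the scale with the standard back-of-the-envelope dynamic-power estimate, P ≈ α · n · C · V² · f. All the numbers below are assumptions I've picked as plausible orders of magnitude, not measurements of any real chip:

```python
# Dynamic ("switching") power: each full charge/discharge cycle of a wire
# dissipates roughly C * V^2 as heat.
C = 1e-15      # ~1 femtofarad of wire capacitance per gate (assumed)
V = 1.0        # supply voltage in volts (assumed)
f = 3e9        # 3 GHz clock
alpha = 0.1    # fraction of gates switching each cycle (assumed)
n_gates = 1e9  # a billion gates

P = alpha * n_gates * C * V**2 * f
print(f"{P:.0f} W")  # ~300 W: heat-sink territory
```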
The way this problem has been getting better is through die shrinks: when everything on a chip is smaller, you can fit more stuff within the same space, and the interconnects are smaller too, so they produce less heat. Everyone wins, until we can't make them any smaller.