
Dopamine, Explained

Dopamine doesn't make you feel good — it makes you want. The complete guide to the prediction-error neurotransmitter, motivation, addiction, and why the "dopamine detox" is both right and wrong.


The popular picture of dopamine — the "pleasure molecule" that spikes when you eat chocolate or check Instagram — is wrong. Not metaphorically wrong, literally wrong. Dopamine does not make you feel good. It makes you want. Those are two different signals carried by two partly different circuits, and mixing them up is why so much of the advice about focus, addiction, motivation, and "dopamine detoxes" is either confused or actively counterproductive.

What dopamine actually encodes, in the mesolimbic system at least, is reward prediction error — the gap between what your brain expected and what it got. That is the computational heart of it, and once you understand that single idea, the rest of the dopamine story falls into place: why scrolling feels compulsive but not pleasurable, why Parkinson's patients lose movement and motivation simultaneously, why stimulants fix attention, why the "dopamine detox" works for the wrong reason, and why the only interventions that reliably rebuild drive are the boring ones.

This article is long because the topic is actually complicated and because the listicle version is how bad advice propagates. Skip around with the headings below.

Dopamine is not the pleasure molecule

Reward prediction error, explained

In the 1990s, Wolfram Schultz and colleagues recorded from dopamine neurons in the ventral tegmental area of monkeys while the animals learned to expect a drop of juice after a light cue. The result became one of the most replicated findings in systems neuroscience: dopamine neurons fired when juice arrived unexpectedly, stopped firing to juice after the monkey learned the cue predicted it, and started firing to the cue itself. If the cue appeared but no juice followed, firing dropped below baseline at the expected time.

That pattern is not a "pleasure response." It is a computation: the difference between what the animal expected and what it got. Positive prediction errors (better than expected) increase firing. Negative prediction errors (worse than expected) decrease it. Perfectly predicted outcomes produce nothing. Dopamine is an error signal, not a reward signal.
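The whole cue-juice pattern fits in a few lines of learning code. What follows is a minimal Rescorla-Wagner-style toy model, not anything taken from Schultz's papers; the learning rate and reward size are arbitrary, and the "dopamine" signal is simply the prediction error.

```python
# Toy model of the Schultz cue->juice experiment. V is the prediction the
# animal has learned to attach to the cue; the "dopamine" signal at each
# moment is the prediction error: what happened minus what was expected.

ALPHA = 0.2    # learning rate (arbitrary)
REWARD = 1.0   # one drop of juice (arbitrary units)

V = 0.0  # the naive animal expects nothing after the cue

def trial(outcome):
    """One cue -> outcome trial; returns the errors at the cue and at the outcome."""
    global V
    err_cue = V                 # the cue itself arrives unpredicted, so its onset
                                # signals whatever value it has come to predict
    err_outcome = outcome - V   # surprise (or disappointment) at reward time
    V += ALPHA * err_outcome    # learn from the error
    return err_cue, err_outcome

first = trial(REWARD)        # naive: (0.0, 1.0) -- burst at juice, nothing at cue

for _ in range(100):         # pair cue and juice 100 more times
    trained = trial(REWARD)  # -> approx (1.0, 0.0): the burst has moved to the cue

omission = trial(0.0)        # cue but no juice -> approx (1.0, -1.0):
                             # a dip below baseline at the expected reward time
```

Three lines of arithmetic reproduce all three findings: the naive burst at reward, the trained burst at the cue, and the below-baseline dip on omission.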

This matters because it explains behaviors that the pleasure-molecule framing cannot. A cocaine user grinding through joyless use is not getting pleasure; he is getting prediction errors from a drug that hijacks the signal. A gambler losing money keeps playing because the variable-reinforcement schedule generates massive prediction errors even on losing trials. An Instagram user scrolling past sixty posts they didn't actually enjoy is the same behavior: the next post might be the one, and the uncertainty itself is what keeps the dopamine system engaged.

Wanting vs. liking (Berridge et al.)

Kent Berridge at Michigan has spent three decades demonstrating that the brain runs "wanting" and "liking" as separate systems. Wanting is dopaminergic, mesolimbic, and drives approach behavior. Liking is opioid- and endocannabinoid-mediated, concentrated in small hedonic hotspots in the nucleus accumbens shell and ventral pallidum, and produces the actual pleasurable experience of a reward.

The experimental evidence is clean. Rats with their dopamine systems destroyed still show normal "liking" facial reactions to sweet tastes — they still experience pleasure — but they won't work to get the sucrose. They'll die in front of a food source unless hand-fed, not because they're depressed but because the wanting signal is gone. Conversely, you can drive wanting pharmacologically without producing any liking response.

In humans, the separation shows up clinically. Addicts describe a late stage where they crave the drug intensely but barely enjoy it. Depressed patients with anhedonia often still show normal reward prediction responses and intact wanting — they pursue goals but report no pleasure in the payoff. Parkinson's patients lose wanting first, and the apathy can predate the motor symptoms by years.

Conflating these two produces predictable mistakes. "Raising dopamine to feel better" is almost always wrong — it raises pursuit and craving, not enjoyment. "Dopamine fasting to restore pleasure" is wrong in the other direction: lowering dopamine tone reduces motivation, not hedonic response.

Why sugar and TikTok hit the same system

Evolutionary logic: the brain evolved prediction-error circuitry to learn which foods, mates, and territories were worth pursuing. In an ancestral environment, novelty was rare, rewards were often delayed, and the system's gain was calibrated to that. Modern environments violate every assumption.

Refined sugar delivers a larger-and-faster-than-possible-in-nature caloric signal. A TikTok feed delivers variable-ratio reinforcement across unlimited novel stimuli at zero cost. Slot machines were engineered by behaviorists. Pornography, short-form video, mobile games, and push notifications all exploit the same basic fact: if you can produce large, unpredictable reward-prediction errors on demand, you can drive compulsive engagement without producing much actual pleasure.

This is not a moral claim. It is an engineering one. The people who built these products did not "hack dopamine" metaphorically — they hired behavioral psychologists who literally optimized for the patterns the mesolimbic system responds to most strongly. That is also why the apps do not feel good. They feel compulsive, which is what wanting without liking feels like.

The dopamine system, mechanically

The four major pathways

Dopamine is made by a small number of neurons — around 400,000 in a human brain out of roughly 86 billion total — but those neurons project widely and organize into four major pathways that do different things.

Mesolimbic pathway. VTA to nucleus accumbens and other limbic targets. This is the reward and motivation system, the one the rest of this article spends most of its time on.

Mesocortical pathway. VTA to prefrontal cortex. Involved in cognition, working memory, attention, and executive function. Disruption of this pathway is central to the cognitive symptoms of schizophrenia and to the attention deficits in ADHD.

Nigrostriatal pathway. Substantia nigra pars compacta to dorsal striatum. Movement control. This is the pathway that dies in Parkinson's disease. By the time a Parkinson's patient develops visible tremor, roughly 60-80% of the substantia nigra dopamine neurons are already dead.

Tuberoinfundibular pathway. Hypothalamus to pituitary. Inhibits prolactin release. Antipsychotics that block dopamine here cause the galactorrhea and gynecomastia that sometimes appear as side effects.

When someone says "dopamine," the default assumption is the mesolimbic system, but clinical drugs and diseases act across pathways. Antipsychotics reduce positive symptoms by blocking mesolimbic dopamine but produce Parkinsonism by blocking nigrostriatal dopamine and raise prolactin by blocking the tuberoinfundibular pathway. L-DOPA treats Parkinson's motor symptoms but can produce impulse control disorders by over-activating the mesolimbic system. Pathway specificity is not a detail; it is most of the clinical picture.

Tonic vs. phasic firing

Dopamine neurons have two firing modes, and they do different things. Tonic firing is a slow, steady background of 2-5 Hz that maintains a baseline level of extracellular dopamine. Phasic firing is brief bursts of 10-30 Hz that produce rapid dopamine transients lasting seconds.

The prediction-error signal lives in phasic firing. The motivational and attentional "gain" of the system — how hard you work for things, how focused you are — lives in tonic firing. Stimulants raise tonic dopamine. A novel reward produces a phasic burst on top. You can have normal phasic responses with low tonic tone (classical ADHD picture) or abnormal phasic responses with normal tone (some addictive vulnerabilities).

This distinction matters because interventions act on these differently. Cold exposure and caffeine raise tonic dopamine modestly over hours. Cocaine raises both tonic and phasic sharply. Amphetamines flood synapses with dopamine from vesicles regardless of firing pattern. SSRIs do not directly touch dopamine but change the tonic-to-phasic ratio indirectly through serotonergic effects on VTA.
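One way to see why the two firing modes produce different dopamine signals is a toy simulation: Poisson spikes each add a pulse of dopamine, and reuptake clears it exponentially. Every constant below (firing rates, clearance, pulse size, burst timing) is illustrative, not a physiological measurement.

```python
import random

DT = 0.001        # 1 ms timestep
CLEARANCE = 2.0   # reuptake rate constant, 1/s (illustrative)
PULSE = 1.0       # dopamine added per spike, arbitrary units

def simulate(rate_fn, seconds=5.0, seed=0):
    """Poisson spiking at rate_fn(t) Hz; return (peak, mean) dopamine level."""
    rng = random.Random(seed)
    da, levels = 0.0, []
    for i in range(int(seconds / DT)):
        t = i * DT
        if rng.random() < rate_fn(t) * DT:  # did a spike land in this millisecond?
            da += PULSE
        da -= CLEARANCE * da * DT           # exponential clearance by reuptake
        levels.append(da)
    return max(levels), sum(levels) / len(levels)

tonic_only = lambda t: 4.0                              # steady background firing
with_burst = lambda t: 25.0 if 2.0 <= t < 2.2 else 4.0  # 200 ms phasic burst at t = 2 s

peak_t, mean_t = simulate(tonic_only)
peak_b, mean_b = simulate(with_burst)
# The burst barely moves the average level but produces a sharp transient peak:
# the phasic signal rides on top of the tonic baseline rather than replacing it.
```

Because both runs use the same random seed, they share their background spikes, so the difference between them is purely the burst: a brief concentration transient against an almost unchanged average.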

Receptors D1 through D5

Dopamine binds to five receptor subtypes, grouped into two families. D1-like receptors (D1 and D5) are excitatory and coupled to Gs protein. D2-like receptors (D2, D3, D4) are inhibitory and coupled to Gi protein. They are not redundant; they mediate different functions and respond to different concentrations of dopamine.

D1 receptors have low affinity and respond to the high dopamine levels produced by phasic bursts. They drive "go" signals in the direct striatal pathway — approach behaviors, reward learning, motor initiation. D2 receptors have high affinity and are already partly activated by tonic levels; they drive the "no-go" indirect pathway and are preferentially engaged when dopamine levels drop below baseline (negative prediction errors).

This asymmetry is part of what makes dopamine a bidirectional signal. The same system encodes both "better than expected" (via D1 activation) and "worse than expected" (via D2 activation) using the same neurotransmitter. Clinical drugs target these differently. First-generation antipsychotics block D2 indiscriminately. Second-generation antipsychotics are mostly D2 antagonists with additional serotonergic effects. Stimulants indirectly activate both D1 and D2. Parkinson's dopamine agonists (pramipexole, ropinirole) preferentially target D2/D3 and are notorious for triggering compulsive gambling, hypersexuality, and binge eating.
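The affinity asymmetry can be made concrete with a simple one-site binding curve, occupancy = C / (C + Kd). The Kd values and dopamine concentrations below are order-of-magnitude assumptions made for the sketch, not measured constants.

```python
def occupancy(conc_nm, kd_nm):
    """Fraction of receptors bound at concentration C: C / (C + Kd)."""
    return conc_nm / (conc_nm + kd_nm)

KD_D1 = 1000.0  # low affinity, roughly micromolar (illustrative assumption)
KD_D2 = 10.0    # high affinity, roughly nanomolar (illustrative assumption)

TONIC = 20.0     # baseline extracellular dopamine, nM (illustrative assumption)
PHASIC = 1000.0  # peak concentration during a burst, nM (illustrative assumption)

d1_tonic, d2_tonic = occupancy(TONIC, KD_D1), occupancy(TONIC, KD_D2)
d1_burst, d2_burst = occupancy(PHASIC, KD_D1), occupancy(PHASIC, KD_D2)

# At tonic levels D2 is already mostly occupied (~0.67 here) while D1 is nearly
# empty (~0.02), so D2 has room to report dips and D1 barely registers anything.
# During a burst D1 occupancy jumps (~0.5) while D2 saturates (~0.99).
```

The design consequence: a receptor that is mostly occupied at baseline (D2) is well placed to report dopamine falling below baseline, and a receptor that is mostly empty at baseline (D1) only responds when a burst pushes concentration up, which is exactly the bidirectional readout described above.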

When dopamine goes up

Food, sex, novelty, achievement

The natural drivers of dopamine release are straightforward: unexpected food rewards (particularly sugar and fat), sexual activity and sexual novelty, new environments and new information, and achievement of challenging goals. All of them work through Schultz-style prediction errors.

A few numerical anchors from microdialysis studies in rats (human numbers are harder to get, but the relative magnitudes carry over):

  • Palatable food: ~150% of baseline extracellular dopamine in nucleus accumbens
  • Sex: ~200%
  • Cocaine: ~400%
  • Methamphetamine: ~1000%

These are peak increases over baseline, not sustained levels. They give a rough sense of scale: drugs of abuse produce dopamine elevations several-fold larger than anything the system evolved to handle.

Novelty is its own story. The mesolimbic system responds to unpredictability independent of reward value. A new environment produces dopamine release even if nothing rewarding happens in it. This is adaptive — unpredictable environments are where new information and new opportunities live — and it is also why scrolling through an infinite feed of content you don't particularly care about can feel compulsive. The feed is engineered novelty.

Achievement of difficult goals produces some of the largest sustained dopamine responses available to a healthy brain. The catch is that the magnitude scales with the prediction error, not the absolute value. An expected goal achievement produces little response. An unexpected one produces a lot. This is part of why people who only pursue highly probable wins report feeling increasingly flat — they have trained their prediction error down to zero.

Drugs of abuse — stimulants, nicotine, alcohol

All drugs with substantial abuse liability converge on mesolimbic dopamine, but they get there by different routes.

Cocaine blocks the dopamine reuptake transporter, so dopamine released by phasic firing lingers longer in the synapse. The pattern of release is preserved but amplified.

Amphetamines (including methamphetamine) reverse the dopamine transporter, dumping dopamine from vesicles into the synapse independent of firing. The signal is divorced from the prediction-error structure, which is part of why amphetamine use is so disruptive to normal reward learning.

Nicotine acts on nicotinic acetylcholine receptors on VTA dopamine neurons, driving firing directly. The dopamine release is smaller in magnitude than cocaine or meth but highly reliable on every dose, and the short half-life (~2 hours) produces the frequent-redosing pattern characteristic of cigarette smoking.

Alcohol releases dopamine through multiple indirect mechanisms, including disinhibition of VTA neurons via GABAergic effects. The dopamine response is delayed and less sharp than stimulants, which is why alcohol produces intoxication with a blurred reinforcement signal.

Opioids produce dopamine release indirectly, but importantly, opioid-induced euphoria is mostly a direct effect of mu-opioid activation in the hedonic hotspots — it actually is a "liking" drug in Berridge terms, not purely a "wanting" drug. This is why opioid addiction has a different phenomenology than stimulant addiction.

All of these drugs produce tolerance through receptor downregulation and through changes in the signaling machinery downstream of the receptor. Chronic use leaves the system less responsive to ordinary rewards and more driven by the drug itself, which is one of the core features of addiction.

Supplements with actual data (tyrosine, mucuna pruriens narrow case)

Most "dopamine boosting" supplements are nonsense. A few have narrow, real effects.

Tyrosine. Tyrosine is the precursor to dopamine synthesis. Supplementation with 100-150 mg/kg raises plasma tyrosine and has shown small but real cognitive and mood benefits in conditions of acute stress, sleep deprivation, and cold exposure — conditions where dopamine synthesis can become rate-limited by precursor availability. It does not meaningfully raise baseline dopamine in rested, well-fed people, and it has not been shown to treat depression or ADHD. The effect size is modest and the population is narrow.

Mucuna pruriens. Contains L-DOPA, which crosses the blood-brain barrier and is converted to dopamine. In Parkinson's patients, it is a pharmaceutically active treatment (though inferior to controlled-release formulations of pure L-DOPA). In healthy people, the L-DOPA dose from standardized extracts is too low to produce central effects in most cases, and chronic use is not well-studied. Some men use it for fertility, where there is weak evidence for improved sperm parameters — this is through prolactin lowering, not a CNS effect. Not a recommended supplement for general use.

Creatine. Has some evidence for cognitive and mood benefits, particularly in vegetarians and the sleep-deprived, but the mechanism is probably energetic rather than dopaminergic. Good supplement for other reasons.

The things that are marketed as dopamine boosters — various herbal stacks, nootropic blends, "focus formulas" — are mostly caffeine plus L-theanine plus marketing. If something in a bottle is going to substantially raise your dopamine, it is a controlled substance or it isn't working.

When dopamine goes down

Parkinson's and the substantia nigra

Parkinson's disease is the clearest picture of what happens when dopamine falls. Progressive death of dopamine neurons in the substantia nigra produces the classic motor triad — bradykinesia, rigidity, resting tremor — along with non-motor symptoms that often predate the movement changes by years: apathy, depression, loss of motivation, olfactory loss, REM sleep behavior disorder, constipation.

The non-motor symptoms are instructive because they show dopamine loss outside its role in movement. Apathy in Parkinson's is not depression — it is a specific reduction in motivated behavior, exactly what you would predict from reduced mesolimbic tone.

L-DOPA replacement treats motor symptoms well for years but does not stop the underlying neurodegeneration. Over time, fluctuations develop (on-off phenomena), and L-DOPA itself produces dyskinesias in most patients after 5-10 years of treatment. Dopamine agonists (pramipexole, ropinirole) are alternatives but are the drugs that produce impulse control disorders — compulsive gambling, hypersexuality, binge eating, compulsive shopping — in roughly 15% of patients. The mesolimbic-impulse-control link shows up dramatically here.

Depression and anhedonia

The relationship between depression and dopamine is more complicated than popular accounts suggest. Serotonin dominates antidepressant pharmacology, and monoamine theory in general is now regarded as incomplete. But a meaningful fraction of depression is specifically dopaminergic.

Anhedonia — the loss of pleasure and interest — tracks with reduced dopaminergic function better than with serotonergic function. Patients with high anhedonia scores respond less well to SSRIs and sometimes better to dopamine-active drugs like bupropion (a dopamine and norepinephrine reuptake inhibitor), stimulants as augmentation, or MAOIs.

The wanting-versus-liking distinction shows up clinically. Many depressed patients have preserved hedonic capacity — they can enjoy something once they get to it — but lost motivation to pursue anything. That is a wanting deficit, mesolimbic rather than hedonic, and it is what makes depression different from sadness.

Dopamine downregulation from chronic overstimulation

Chronic high dopamine — from drugs of abuse, from compulsive behaviors, from stimulant medication — downregulates D2 receptors. PET imaging in people addicted to cocaine, methamphetamine, heroin, and alcohol all shows reduced D2 receptor availability in the striatum, and the effect is proportional to the severity and duration of use.

Whether the same thing happens from non-drug chronic overstimulation is an open question. PET studies in pathological gamblers and in heavy pornography users have suggested similar patterns, but the data are much thinner and the causal direction is unsettled. It is plausible that compulsive short-form video consumption, for example, downregulates D2 receptors in a similar way, but the experimental evidence is not yet clean.

What is clean is the phenomenology: people who spend months or years in a state of constant high-dopamine stimulation report diminishing response to ordinary life. Food is less interesting. Exercise is less rewarding. Books can't hold attention. Sex is less exciting than pornography. This is downregulation, and it is reversible, but the timescale is weeks to months of reduced stimulation, not days.

The "dopamine detox" reconsidered

What's wrong with the framing

"Dopamine detox" is a popular wellness concept: abstain from social media, porn, junk food, and other high-stimulation inputs for 24 hours or a week, and your dopamine system will "reset." The name is misleading and the mechanism as sold is wrong.

Dopamine does not accumulate like a toxin, and you do not "flush" it. Baseline dopamine levels do not change meaningfully over a single day of abstinence. Tonic dopamine firing continues whether you scroll Instagram or not. The "reset" framing implies a fixed natural baseline that you drift away from and snap back to after a day off, and that is not how the system works.

The popular version of dopamine detox also invites a particular mistake: trying to minimize dopamine as an end in itself. This is both impossible and undesirable. Too little dopamine looks like Parkinson's, not like nirvana. A monk who has spent thirty years in a monastery probably has normal dopamine levels — he has different behaviors because his reinforcement environment is different, not because his dopamine neurons have been sedated.

What's right underneath

The practice is more defensible than the name. What a "dopamine detox" actually does, when it works, is temporarily remove a set of behaviors that produce large, frequent prediction errors, allowing smaller rewards to become noticeable by comparison.

If your baseline is a day of short-form video, porn, sugar, and push notifications, the prediction-error magnitudes from ordinary activities like a conversation, a walk, or a book fall below the threshold where they feel like anything. Remove the high-stimulation inputs, and the ordinary activities start to register. This is a contrast effect, not a receptor recovery, but it is real and it does not require weeks.

Longer abstinence — weeks to months — may also allow partial D2 receptor upregulation if you were in a chronic high-dopamine state. The evidence is mostly in drug-addicted populations, not in smartphone users, but the mechanism is plausible.

A real protocol (if you want one)

If you want a non-woo version of this, here is what actually works:

  1. Identify the 2-3 inputs in your life that produce the largest and most frequent artificial prediction errors. For most people this is some combination of short-form video, porn, gambling or stock-checking apps, and notification-driven messaging.
  2. Remove them for 2-4 weeks. Not "limit." Remove. Delete the app or use a blocker that requires real friction to disable.
  3. During that period, do not substitute other high-stimulation inputs. Read a book. Walk without headphones. Sit bored.
  4. Add back intentionally. Not all of these inputs are equivalent; a conversation with a friend is not the same category as a TikTok feed even if both produce prediction errors.
  5. If you cannot sustain 2-4 weeks without the input, that is diagnostic information about how much of your wanting system was bound up in it.

This is the same structure as any detoxification protocol from any substance. Call it what you want; the mechanism is habituation reversal plus, maybe, modest D2 recovery.

Focus, motivation, and the modern environment

Why ordinary rewards feel dim

If you spent your evenings ten years ago reading a novel and now you spend them scrolling, and the novel now feels boring, the novel did not change. Your reference point did. Prediction errors scale against recent history. If recent history is filled with 90th-percentile stimuli, 50th-percentile stimuli produce negative prediction errors — they feel worse than neutral, not neutral.
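The reference-point idea fits in a few lines: treat the "felt" response to an input as its prediction error against an exponential moving average of recent inputs. The smoothing rate and the 0-10 stimulus scale are invented for illustration.

```python
EMA_RATE = 0.1  # how fast the reference point tracks recent history (illustrative)

def felt_values(inputs):
    """Return input - expectation for each input, updating the expectation as it goes."""
    expectation, felt = 0.0, []
    for x in inputs:
        felt.append(x - expectation)                 # prediction error = felt response
        expectation += EMA_RATE * (x - expectation)  # recalibrate toward recent inputs
    return felt

# Fifty 90th-percentile stimuli (9 on an invented 0-10 scale), then a book (5):
responses = felt_values([9.0] * 50 + [5.0])
# responses[0] is +9.0: the first hit of high stimulation is a huge surprise.
# responses[-1] is negative: an objectively fine stimulus now reads as worse
# than neutral, because the reference point has drifted up to ~9.
```

Run the same stream with the book first and it feels strongly positive; the stimulus did not change, only its position relative to recent history did.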

This is the core of why "nothing is interesting anymore" is a common complaint in people with high-stimulation media diets. The complaint is usually framed as depression or ADHD, and sometimes it is, but often the simpler explanation is that the prediction-error calibration has drifted. The fix is not a stimulant; it is a period of reduced stimulation to let the calibration re-anchor.

Reward prediction, novelty, variable reinforcement

Three features of an input produce the strongest dopaminergic engagement:

  1. Variable reinforcement. The reward is unpredictable. Slot machines, social media feeds, email, stock prices, dating apps. Fixed schedules (the kind of thing evolved rewards were mostly on) produce smaller dopamine responses than variable ones.
  2. Novelty. Each item is new relative to recent experience. Infinite-scroll feeds engineered for novelty saturate this channel.
  3. Low cost of sampling. The friction to the next potential reward is near zero. You don't have to hunt; you just swipe.

Engineered modern inputs maximize all three. Most traditional activities — reading, exercise, conversation, skilled work — score high on only one or two. That is not a reason to avoid engineered inputs entirely, but it is a reason to understand why some activities demand attention from you and others require you to force attention onto them. The force-attention-on-it activities are usually the ones with durable rewards.
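The variable-reinforcement point can be demonstrated with the same error-learning toy used throughout: hold the average payout fixed and compare a predictable schedule against an unpredictable one. All parameters are illustrative.

```python
import random

ALPHA = 0.1  # learning rate (illustrative)

def mean_abs_error(outcomes):
    """Learn a running prediction over the stream; return mean |error| over the last 100 trials."""
    v, errors = 0.0, []
    for r in outcomes:
        err = r - v
        v += ALPHA * err
        errors.append(abs(err))
    return sum(errors[-100:]) / 100

random.seed(0)
fixed = [1.0] * 1000                                                     # same reward every trial
variable = [2.0 if random.random() < 0.5 else 0.0 for _ in range(1000)]  # same mean, unpredictable

fixed_err = mean_abs_error(fixed)        # ~0: fully predicted, the error signal vanishes
variable_err = mean_abs_error(variable)  # stays near 1: sizable errors on every trial
```

Both streams pay out 1.0 per trial on average, but only the unpredictable one keeps generating prediction errors no matter how long you train on it. That persistence is the property slot machines and feeds are built around.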

Dopamine and ADHD

The stimulant model

Attention deficit hyperactivity disorder maps approximately to reduced tonic dopamine tone in prefrontal and striatal circuits. PET imaging shows reduced dopamine transporter availability in some ADHD populations, and the clinical response to dopaminergic stimulants is the most reliable treatment effect in psychiatry — roughly 70-80% of adults with ADHD respond substantially to methylphenidate or amphetamine-class drugs.

The model is not "ADHD patients have low dopamine and stimulants raise it." That is too simple. The more careful model is that ADHD involves a disorganized relationship between tonic tone and phasic signaling, producing difficulty maintaining task-relevant attention when the task is not highly stimulating. Stimulants raise tonic tone, which increases the gain on task-related signals relative to distractors.

The counterintuitive fact about stimulants in ADHD is that they make people less excitable, not more. An adult with ADHD starting methylphenidate typically feels calmer, more focused, and less driven toward compulsive distraction. The same dose in someone without ADHD tends to feel energizing and mildly euphoric. Dopaminergic systems in the two populations are not in the same starting state.

Why Adderall and Ritalin work differently

The two first-line ADHD medication classes act on dopamine differently enough that patients often respond to one and not the other.

Methylphenidate (Ritalin, Concerta) blocks the dopamine and norepinephrine reuptake transporters. It amplifies the signal of whatever the dopamine system is doing — tonic and phasic firing both become more effective. Clean mechanism, closer to what cocaine does but with much slower pharmacokinetics (oral, extended-release formulations produce steady rather than spiking plasma levels).

Amphetamines (Adderall, Vyvanse, Dexedrine) also block reuptake but additionally reverse the dopamine transporter, causing dopamine release independent of firing. The effect is larger and less dependent on the underlying firing pattern. For some patients, this is better. For others, the divorce of dopamine release from signal produces a feeling of "being stimulated but not focused."

Adderall contains mixed amphetamine salts; Vyvanse (lisdexamfetamine) is a prodrug activated in the body, producing smoother pharmacokinetics and lower abuse liability. Dexedrine is pure dextroamphetamine.

The choice between methylphenidate and amphetamine is not usually a rigorous one; it is often whichever the prescriber starts with, and patients switch if the first does not work. Response rates for each alone are around 65-75%; the combined response rate across both is closer to 90%. This is one of the better drug-class pairs in psychiatry.

A note on abuse: both classes are Schedule II controlled substances with real abuse potential, particularly amphetamines. Therapeutic use at prescribed doses in people with actual ADHD does not appear to produce meaningful receptor downregulation or addiction in most patients. Recreational use at supratherapeutic doses is a different drug entirely, and the long-term risks are real.

Honest take

Almost every popular claim about dopamine is at least partly wrong. It is not the pleasure molecule — it is the wanting signal, and wanting and pleasure are separate systems that can dissociate, which is why people can crave things they no longer enjoy. "Dopamine detox" does not flush anything out, but the practice of removing the largest artificial sources of prediction error from your environment for two to four weeks does work, for reasons the name misrepresents.

The idea that you can "hack" dopamine with supplements is mostly cope — tyrosine under narrow conditions, not much else — and the things sold as focus formulas are caffeine plus theanine plus marketing. The real lever is your reinforcement environment: what you spend time on is what your wanting system trains on, and the engineered high-variance inputs that dominate modern attention are genuinely, mechanistically corrosive to interest in ordinary activities.

If you have ADHD, stimulants work and are over-stigmatized among people who need them and under-stigmatized among people who don't; if you don't have ADHD, chasing focus with prescription stimulants is a worse trade than it looks. The non-obvious move for most people with a "motivation problem" is not to raise dopamine but to lower the average prediction-error magnitude of their daily inputs so that ordinary rewards re-enter the detectable range. Nothing in a bottle does that for you. The boring levers — sleep, consistent exercise, morning light exposure, removing the three or four apps that own your attention — are almost the whole game.

Sources

  • Schultz W., Dayan P., Montague P.R., Science (1997) — A neural substrate of prediction and reward. The foundational reward-prediction-error paper.
  • Berridge K.C., Robinson T.E., Brain Research Reviews (1998) — What is the role of dopamine in reward: hedonic impact, reward learning, or incentive salience? The wanting-vs-liking framework.
  • Volkow N.D. et al., Archives of General Psychiatry (2001) — Loss of dopamine transporters in methamphetamine abusers. Imaging evidence for chronic dopamine system changes with stimulant abuse.
  • Howes O.D., Kapur S., Schizophrenia Bulletin (2009) — The dopamine hypothesis of schizophrenia: version III. Current understanding of pathway-specific dopamine dysfunction.
  • Schultz W., Nature Reviews Neuroscience (2016) — Dopamine reward prediction-error signalling: a two-component response. A more recent update with mechanistic detail on phasic vs. tonic signaling.
  • Volkow N.D., Wise R.A., Baler R., Nature Reviews Neuroscience (2017) — The dopamine motive system: implications for drug and food addiction. Covers cross-substance convergence on mesolimbic dopamine.
  • Weintraub D. et al., Archives of Neurology (2010) — Impulse control disorders in Parkinson disease: a cross-sectional study. Evidence for dopamine agonist-induced compulsive behaviors.
  • Cortese S. et al., Lancet Psychiatry (2018) — Comparative efficacy and tolerability of medications for ADHD in children, adolescents, and adults. Network meta-analysis of stimulant and non-stimulant treatments.
  • Hoeks J. et al., Biological Psychiatry (2014) — Anhedonia and reward-related functional brain response in major depressive disorder. Dopaminergic specificity of anhedonia.
  • Salamone J.D., Correa M., Neuron (2012) — The mysterious motivational functions of mesolimbic dopamine. Review arguing against the dopamine-as-pleasure model.
