Can anyone recommend a decent decaff coffee? I've been off caffeine for a couple of months now and am really struggling to find something I like. I tried the only decaff beans that http://www.hasbean.co.uk/ do and I wasn't too keen. Generally, when I have a decaff coffee 'out' it always seems pretty good.
Decaf? Nearly puntered you for that, but hopefully you're just clearing out your receptors before doing some huge alpine push where you need all the advantage that caffeine can give you, à la Mark Twight.
I’ve examined other coffee variables using a similar experimental approach and found only a few factors that had any measurable effect on coffee flavor in isolation. For example, variables like bean freshness or bean purveyor have little effect on flavor. As a result of these experiments, my brewing setup is simple, quick, and inexpensive. I buy the cheapest whole-bean shade-grown coffee I can find in my preferred roast. To brew a cup of coffee, I grind the beans with a blade grinder and brew with the Aeropress. The Aeropress and blade grinder can be found on Amazon for around $25. The entire brewing process takes about 5 minutes and produces great coffee. Until I obtain convincing evidence that supports investing additional money in brewing accoutrements, I see little reason to deviate from this system. In the words of Carl Sagan, extraordinary claims require extraordinary evidence.
I love the tale of Steve Jobs' 'reality distortion field' in action, and this quote:

Quote: In 2007, Richard E. Quandt, a Princeton economics professor, published a paper entitled "On Wine Bullshit: Some New Software?" The study sought to describe the "unholy union" of "bullshit and bullshit artists who are impelled to comment on it", in this case wine and wine critics. Quandt compiled a "vocabulary of wine descriptors" containing 123 terms from "angular" to "violets" via other nonsense descriptions such as "fireplace" and "tannins, fine-grained". Then, with the help of colleagues, he built an algorithm that generated wine reviews of hypothetical wines using his "vocabulary of bullshit". For instance: "Château L'Ordure Pomerol, 2004. Fine minerality, dried apricots and cedar characterise this sage-laden wine bursting with black fruit and toasty oak." He concluded that whether his reviews were "any more bullshit" than real ones was a "judgment call". Sadly, he didn't explore how long it would take a monkey to type a wine review.
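For the curious: the sort of generator Quandt describes is only a few lines of code. A throwaway sketch, assuming nothing about his actual implementation (the descriptor list and the template below are illustrative, not taken from his paper):

```python
import random

# A tiny slice of a "vocabulary of wine descriptors" (illustrative only;
# Quandt's real list had 123 terms).
DESCRIPTORS = [
    "angular", "fireplace", "fine minerality", "dried apricots",
    "cedar", "black fruit", "toasty oak", "violets",
    "tannins, fine-grained", "sage-laden",
]

def wine_review(wine, rng=random):
    """Generate a review by splicing random descriptors into a template."""
    terms = rng.sample(DESCRIPTORS, 3)
    return (f"{wine}. {terms[0].capitalize()}, {terms[1]} and {terms[2]} "
            f"characterise this wine.")

print(wine_review("Château L'Ordure Pomerol, 2004"))
```

Whether the output is any more bullshit than a real review is, as he says, a judgment call.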
Yeah, I owned that little book when I lived in Canada but lost it on moving back to the UK. I really should hunt down another copy; it's a great guide.
http://www.drbunsen.org/coffee-experiments/
But to run a trial and claim to come to conclusive, somehow objective, results about what constitutes a good coffee is bullshit. To do it by trialling with subjects who aren't in fact interested connoisseurs is even worse bullshit.
Quote: But to run a trial and claim to come to conclusive, somehow objective, results about what constitutes a good coffee is bullshit. To do it by trialling with subjects who aren't in fact interested connoisseurs is even worse bullshit.

One of his points (implied by the wine-tasting link) is that 'interested connoisseurs' can't tell the difference between supposedly superior and inferior products, and have kidded themselves and others that they can - the hilariously-labelled 'reality distortion field' - whilst inventing a lexicon of bullshit to make their 'knowledge' sound legit.

He also says his guests/guinea pigs all own burr grinders - he had to borrow them for his experiment as he only owns a blade grinder - implying the guinea pigs are all pretty keen on their coffee (North America is still well ahead of the UK in terms of people being well into their coffee). Yet in the blind tests the guests didn't show any preference for burr-ground coffee, instead showing a slight preference for blade-ground.
I think you've over-interpreted the aim of the work, Sam. I read it as a bit of fun with some formal numerical analysis to reassure himself that there is no need to buy an expensive coffee machine. The sample size is woefully inadequate to make any generalisations, as you've noted.

But I bet you there are some people out there who will go out and spend $11,000 and claim it makes the most amazing coffee in the world (because they have to justify their expenditure to themselves, and they are likely regurgitating the advertising blurb).

@Pete: You might also enjoy "Irrationality: Why We Don't Think Straight" by Stuart Sutherland (not overly technical, but then it is a popular science book, and it has some great examples).
I highlighted the solid science in bold there. To me, it could equally be that they're middle class sheeple.
Increasing the sample size wouldn't solve the fundamental problems with overly-general implications he is drawing from the data.
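To put a number on how little a handful of tasters can show: a blind preference test is just a sign test, and with a small panel even strong agreement barely registers. A quick sketch, assuming a panel of six (the blog post doesn't state its exact headcount):

```python
from math import comb

def sign_test_p(successes, n):
    """One-sided exact binomial (sign test) p-value: P(X >= successes | p = 0.5)."""
    return sum(comb(n, k) for k in range(successes, n + 1)) / 2 ** n

# With six tasters, only a unanimous verdict clears the usual 0.05 bar:
print(sign_test_p(6, 6))  # 1/64 = 0.015625
print(sign_test_p(5, 6))  # 7/64 = 0.109375 - five out of six proves nothing
```

So "a slight preference for blade-ground" in a panel this size is well within coin-flip territory either way.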
Quote from: psychomansam on October 09, 2013, 03:48:37 pm
I highlighted the solid science in bold there. To me, it could equally be that they're middle class sheeple.

Presumably meaning that you think people who go out to buy burr grinders on the recommendation of some guy telling them that 'blade grinders scorch the beans' are 'middle class sheeple'... Again, I'm not sure how you can tell that his subjects aren't 'interested connoisseurs'. You can't. And the point of his fun little experiment is not to be the 'conclusive' last word, but rather to help highlight how much guff is talked by 'connoisseurs' about what equipment and ingredients are required for great-tasting coffee (and wine). His advice: the cheapest shade-grown beans, a cheap blade grinder and an Aeropress = excellent coffee, which his guinea pigs found preferable to coffee made using more expensive grinders and beans.

Keep the good book pointers coming, Slackers!
He should have pointed out his research findings were probably false anyway...
Quote from: petejh on October 09, 2013, 05:38:18 pm
He should have pointed out his research findings were probably false anyway...

When they did look at the highlighted subset there was no effect.

Staying on the subject of false findings...

"Why Most Published Research Findings Are False" by John Ioannidis in PLoS Medicine (open access).

Related articles:
"When Should Potentially False Research Findings Be Considered Acceptable?"
"Most Published Research Findings Are False—But a Little Replication Goes a Long Way"

And a serious attempt to estimate the science-wise False Discovery Rate (Type I error/false positive). One of the authors blogs about it here and indicates the paper should be open access; there is also some communication published in the journal in response to the article, linked from it, including from Ioannidis, who wrote the first article linked above.
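The core arithmetic behind the Ioannidis paper fits in a few lines: the chance a 'significant' result is actually true depends on the prior odds that a tested hypothesis is true (his R), the study's power, and the significance threshold. A minimal sketch of that formula (the example numbers are illustrative, not from the paper):

```python
def ppv(prior_odds, power=0.8, alpha=0.05):
    """Positive predictive value of a 'significant' finding.

    prior_odds: ratio of true to false hypotheses being tested (R).
    True positives scale with power * R; false positives scale with alpha.
    """
    return (power * prior_odds) / (power * prior_odds + alpha)

# If 1 in 10 tested hypotheses is true and power is a respectable 0.8:
print(round(ppv(0.1), 3))             # 0.615 - nearly 4 in 10 positives are false
# Underpowered exploratory work fares far worse:
print(round(ppv(0.1, power=0.2), 3))  # 0.286 - most positives are false
```

That's before any of the bias and multiple-testing terms in the full paper, which only push the numbers down.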
Quote from: psychomansam on October 09, 2013, 04:17:22 pm
Increasing the sample size wouldn't solve the fundamental problems with overly-general implications he is drawing from the data.

Yes it does: with larger sample sizes you can then perform more robust and valid sub-group analyses, for example "What are the factors that influence coffee preference in hypersensitive tasters?" (the 'Psychomansam sub-group') and compare and contrast this with "What are the factors that influence coffee preference in hypo-sensitive tasters?" (the 'Psychomansam's heathen house-mates sub-group').

Chop the larger sample up any way you like, as long as you pre-specify your intentions and don't dredge the data; over-stating the claims of sub-group analysis can land you in serious trouble (at least in the US).

That's all rather academic though; it's only coffee and shouldn't be taken too seriously, regardless of the wording of the blog post.
I might read that at some point, but having taken part in about a dozen clinical trials, I'm not sure I even need to. (I have read a bit about the issues before and heard them from Dr friends.) Us healthy subjects, we all lied. Loads of the controls were shit too.

For instance, trialling a drug where one of the major side-effects is going to be low blood sugar, they had us all on a controlled diet (as for most studies). This means everyone gets the same amount of food and must eat it. Some people on the trial weigh almost double other people on the trial. So some people are struggling not to be sick. Others, like me, collapsed from low blood sugar - a negative side effect they have to report.

Then consider that a lot of trials involve 3-day breaks where you go home. You're not allowed to drink or exercise as it'll fuck with the study findings. Yet almost everyone does at least one of the two. I actually almost got kicked off a trial once because I'd spent 3 hours at the works the day before, but I got away with it.

Then there's the multiple-phase studies, where you get to stay longer and get more money only if you don't have bad side effects. So you don't report the side effects...

Tip of the iceberg.