Hendershot rationalized the lie by saying the doping process was overseen by Max Testa, an Italian doctor who is still working in the sport and running a sports medicine clinic in Utah. In 2006, Testa told me that he gave his riders instructions on using EPO but never administered drugs to them. In 2014, he said he didn’t want to discuss anything about the cyclists he had worked with, to protect the privacy of his patients. Still, if drug use was not mandated by the team, it appeared to be at least quasi-official. Hendershot trusted Testa to keep the riders safe, believing that Testa, unlike other doctors in cycling, actually cared about the riders’ health more than about winning or money.
Hendershot, however, put it this way: A doctor who refused to give riders drugs wouldn’t last in the sport.
Armstrong liked Testa so much that he moved to Italy to be near the doctor’s office in Como, north of Milan. Not long after joining Motorola, Armstrong began living in Como during the racing season. He brought along his close friend Frankie Andreu, and in time several other riders joined them, including George Hincapie, a New Yorker, and Kevin Livingston, a Midwesterner. All became patients of Testa. All later became riders on Armstrong’s Tour de France-winning United States Postal Service teams.
So, we're gonna point some fingers pretty soon, right?
All this maths and statistics is fun. No, really, it is - at least, when I can keep up with it. But what I’m looking forward to is somebody shouting ‘doper!’ at the top n riders in a GT and having some maths to back them up. Bring on the Giro!
(…you will be shouting ‘doper!’, right? That’s still the point of this little experiment, or did I miss something?)
shouting doesn’t seem to have much impact
but I’ve been asked to consult on an antidoping project involving several teams in a region with very high doping prevalence
my goal is to develop an approach that significantly tightens the speed limit on doping
1: Ma D, Lim T, Xu J, Tang H, Wan Y, Zhao H, Hossain M, Maxwell PH, Maze M. Xenon preconditioning protects against renal ischemic-reperfusion injury via HIF-1alpha activation. J Am Soc Nephrol. 2009 Apr;20(4):713-20. doi: 10.1681/ASN.2008070712. Epub 2009 Jan 14.
seriously, how can you keep up?
xenon apparently has been used for several years to dope athletes through its effects on hypoxia-inducible factors and their downstream target EPO, though i’m a bit skeptical about how well this mouse data translates to humans
Hi Dr. Veloclinic. I don't understand these models until I see them approximately applied in the real world. Is it possible to apply your Holy Grail of Endurance Capacity model to a semi-weekly stress test? For example, a 10 minute time trial effort over the same stretch of fire road? Let's just pretend that environment isn't a factor and my GPS computer is really accurate. It's possible I just don't understand it too. Thanks for your work and patience.
if you want to compare one 10 minute TT to another 10 minute TT, the best way is to compare them directly, without any model.
models come in if you want to start comparing efforts of different lengths, say a 10 minute vs a 20 minute TT, or if you want to predict a 30 minute TT from a 10 minute TT.
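as a toy illustration of that second use (my sketch, not this site’s model: the two-parameter critical power model P(t) = CP + W'/t, with invented wattages), predicting a 30 minute TT from a 10 and a 20 minute TT looks like this in R:

```r
# two-parameter critical power model: P(t) = CP + W'/t
# given mean power from a 10 min and a 20 min TT, solve for CP and W',
# then predict the 30 min TT power; all wattages are invented
t1 <- 600;  p1 <- 340   # 10 min TT, watts
t2 <- 1200; p2 <- 320   # 20 min TT, watts

# work done: P*t = CP*t + W', two equations in two unknowns
cp     <- (p2 * t2 - p1 * t1) / (t2 - t1)   # 300 W
wprime <- (p1 - cp) * t1                    # 24000 J

p30 <- cp + wprime / 1800                   # predicted 30 min power, ~313 W
c(CP = cp, Wprime = wprime, P30 = p30)
```

the same algebra is also why a single 10 minute test can’t pin down both parameters: you need at least two durations.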
the intermittent model comes into play if you want to understand repeated TTs separated by intervals of relative rest, for example if you did 3 10 minute TTs separated by 2 minutes of rest and wanted to compare that to 2 20 minute TTs separated by 5 minutes, etc.
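one common way to make the repeated-effort case concrete is W'-balance style bookkeeping (a Skiba-type idea, sketched here only as an illustration, not necessarily the intermittent model meant above): W' drains above CP and recharges below it. the CP, W', and recharge time constant values are invented:

```r
# toy W'-balance trace for intermittent efforts (illustrative only):
# above CP, W' drains at (P - CP) joules per second; below CP it
# recharges exponentially toward full with time constant tau_rec
wbal_trace <- function(power, cp = 300, wprime = 24000, tau_rec = 300) {
  w <- wprime
  out <- numeric(length(power))
  for (i in seq_along(power)) {                       # 1 s time steps
    if (power[i] > cp) {
      w <- w - (power[i] - cp)                        # linear depletion
    } else {
      w <- wprime - (wprime - w) * exp(-1 / tau_rec)  # exponential recharge
    }
    out[i] <- w
  }
  out
}

# 3 x 10 min at 315 W separated by 2 min easy at 150 W
power <- rep(c(rep(315, 600), rep(150, 120)), 3)
w <- wbal_trace(power)
min(w)  # lowest W' balance; still positive, so the session is feasible
```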
How useful do you think lab tests, like VO2, are in determining a person's talent and/or potential? Especially for someone who isn't pro, but isn't a couch potato either, and who obviously isn't a genetic freak (like maybe Sagan, or Eero Mäntyranta)? Do you think it is something that can be determined, or do you think potential is something that's more unknowable and can't be determined from such tests?
lab tests are a reasonable screening tool from a coach/DS perspective, as they can provide a sense of an athlete’s ceiling
(now to be truthful, i have no way of knowing what the WKO4 model actually is as that’s a secret but since you now have a non-secret model that gives the same curve fit you probably want to use the non-secret one)
cascading tau and cascading time ranges as an option to solve start parameters for Damien's multi-tank Golden Cheetah model
in the previous posts i introduced the Wilkie correction and suggested its scalability. this post just refines those thoughts a touch and suggests an approach for Damien to solve for the start parameters for the upcoming multi-tank Golden Cheetah model, which can then be further optimized in his model.
at the very long time points, where the sugar/fat subdivision is necessary, the contribution from the low-capacity system will be negligible, so it can be dropped from the equation along with the ramp phase of the high-capacity system.
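a quick numeric check of that negligibility claim (my own sketch; the pow/tau values are invented, not fitted):

```r
# at long durations the low-capacity (short tau) component contributes
# almost nothing relative to the high-capacity component; invented parameters
pow1 <- 800; tau1 <- 20     # low-capacity system
pow2 <- 300; tau2 <- 20000  # high-capacity system

t  <- 3600  # one hour
lc <- pow1 * tau1 / t * (1 - exp(-t / tau1))
hc <- pow2 * tau2 / t * (1 - exp(-t / tau2))
c(low = lc, high = hc, low_share = lc / (lc + hc))  # low share under 2% here
```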
an HC capacity system could be added, so that we now have a solvable equation for an infinitely scalable Margaria fluid model.
mathematically, these modifications don’t actually change the shape of the total predicted curve (unless you add a component); they just let you subdivide the area under the curve for a possibly more realistic understanding of the contribution of the underlying components.
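to see the shape claim numerically: the three-term corrected form is algebraically the plain two-term curve with pow1 relabeled as pow1 - pow2, so the correction only subdivides the area rather than moving the total curve. a quick check (my own, with invented parameters):

```r
# the corrected three-term curve equals the plain two-term curve
# with pow1 replaced by (pow1 - pow2); invented parameters
pow1 <- 800; pow2 <- 300; tau1 <- 20; tau2 <- 20000
x <- seq(5, 10000, by = 5)

comp <- function(p, tau) p * tau / x * (1 - exp(-x / tau))

wilkie <- comp(pow1, tau1) + comp(pow2, tau2) - comp(pow2, tau1)
plain  <- comp(pow1 - pow2, tau1) + comp(pow2, tau2)

max(abs(wilkie - plain))  # floating-point zero: identical total curves
```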
the code, if you want to compare the Flock-KIMECO Wilkie modification with tau3 = 15 seconds against the Ward-Smith simplification, is then:
# start here: Wilkie mod with fixed tau3 = 15 s vs Wilkie mod with single tau1 simplification

library(nls2)

pow <- read.csv("pow22.csv")
x <- pow$Time
y <- pow$Power
dspow <- data.frame(x = x, y = y)

## Example 1: Wilkie modification with fixed tau3 = 15 s
## brute force followed by nls optimization
fo <- y ~ pow1*tau1/x*(1 - exp(-x/tau1)) +
  pow2*tau2/x*(1 - exp(-x/tau2)) -
  pow2*15/x*(1 - exp(-x/15))

# pass our own set of starting values;
# nls2 returns the result of the brute-force search as an nls object
st1 <- expand.grid(pow1 = seq(300, 2000, len = 10),
                   tau1 = seq(5, 60, len = 10),
                   pow2 = seq(150, 500, len = 11),
                   tau2 = seq(2500, 100000, len = 11))
mod1 <- nls2(fo, start = st1, data = dspow, algorithm = "brute-force")
mod1

# use the nls object mod1 just calculated as the starting value for
# the nls optimization; same as: nls(fo, start = coef(mod1))
rpow <- nls2(fo, start = mod1, data = dspow)
plot(x, y, log = "x")
lines(x, predict(rpow))
presid <- resid(rpow)/y*100
plot(x, presid, log = "x")
rpow

## Example 2: Wilkie modification with the Ward-Smith single tau1 simplification
fo2 <- y ~ pow1*tau1/x*(1 - exp(-x/tau1)) +
  pow2*tau2/x*(1 - exp(-x/tau2)) -
  pow2*tau1/x*(1 - exp(-x/tau1))

st2 <- expand.grid(pow1 = seq(300, 2000, len = 10),
                   tau1 = seq(10, 20, len = 5),
                   pow2 = seq(150, 500, len = 11),
                   tau2 = seq(2500, 100000, len = 11))
mod2 <- nls2(fo2, start = st2, data = dspow, algorithm = "brute-force")
mod2
rpow2 <- nls2(fo2, start = mod2, data = dspow)
plot(x, y, log = "x")
lines(x, predict(rpow2))
lines(x, predict(rpow))      # overlay the Example 1 fit for comparison
presid2 <- resid(rpow2)/y*100
plot(x, presid2, log = "x")
lines(x, presid)             # overlay the Example 1 residuals
rpow2