Notes on a variety of things in no order.
Notes I: Papers, posts, ideas, information
Notes II: Economics, medicine, (geo)politics, technology, science
Notes III: Books, podcasts
Notes IV: Electronics
Notes V: Miscellaneous
The notes below are designed for ease-of-input, i.e., I want there to be little in the way of entering them besides links and quotes. Will I reference them later? I don't know. Can I reference them later? Yes.
Six-month policies allow adjustments of premiums based on the previous six months: more frequent adjustments generally result in higher premiums.
The increase in these white blood cells affects your hypothalamus. This makes your body heat up, causing a fever. In the early stages of a fever, you often feel cold and start to shiver. This is your body’s response to a rising temperature. The blood vessels in your skin tighten up (constrict), forcing blood from the outer layer of your skin to inside your body where it is easier to keep the heat in. The outer skin layer then becomes cool and your muscles start to contract. This makes you shiver. Shivering produces more heat and raises your temperature even more.
Symmetrical tire tread patterns deliver smooth driving, high directional stability, and low rolling resistance.
Directional tire tread patterns provide high protection against aquaplaning, excellent handling on snow and mud, and very good road-holding at high speeds.
Asymmetric tire tread patterns offer excellent handling, high curve stability, and good grip in wet conditions.
The combination of high thermal conductivity and specific heat coupled with chemical inertness gives helium unique advantages over any other gas as a reactor coolant. The Wikipedia page on helium cryogenics has more details.
https://www.youtube.com/watch?v=au3l5F03JDc 80 watts can cut a sausage if you move it slowly enough, and there will be scorch marks and plasma flashes. https://www.youtube.com/watch?v=W6FbUiiwutQ A 100 W laser is already dazzling (tbh, it's an array of less powerful lasers, around 11 W each). A laser powerful enough to slice off your head would blind everyone even without a plasma flash. Maybe an IR laser would be less conspicuous, but human flesh is bad at absorbing short-wave IR.
I can't be arsed to type it all out on my phone, and the infinitely-thin-blade effect is a nonstarter because a steam explosion would cause people to explode, but 10s of kilowatts would definitely get you somewhere useful.
The plan proposes the insertion of 00 or 11 between the NPA and the NXX, to produce 12-digit numbers. The plan further proposes that the US would use either 00 or 11, while Canada would use the other, in order to allow customers to distinguish countries by use of these digits, which do not appear at the beginning of the 12-digit number. This distinction would quickly vanish as digits other than 0 and 1 are used in these positions after permissive dialing ends.
For these examples, it is assumed that the new digits will be 00 for the US, and 11 for Canada. With these assumptions, under this plan, the New Jersey telephone number 609-555-0175 would become 6090-0555-0175, and would be dialed as such.
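A minimal sketch of the conversion described above, assuming (as in the example) that the US gets 00 and Canada gets 11; the function name and the 4-4-4 grouping are my own illustration, not part of the plan:

```python
def expand_nanp(number: str, country: str = "US") -> str:
    """Insert the proposed expansion digits (00 for the US, 11 for Canada)
    between the NPA (area code) and the NXX (exchange) of a 10-digit
    NANP number, producing a 12-digit number."""
    digits = "".join(ch for ch in number if ch.isdigit())
    if len(digits) != 10:
        raise ValueError("expected a 10-digit NANP number")
    insert = "00" if country == "US" else "11"
    npa, nxx, line = digits[:3], digits[3:6], digits[6:]
    expanded = npa + insert + nxx + line
    # Group as 4-4-4 to match the example 609-555-0175 -> 6090-0555-0175.
    return f"{expanded[:4]}-{expanded[4:8]}-{expanded[8:]}"

assert expand_nanp("609-555-0175", "US") == "6090-0555-0175"
```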
There are two fundamentally different components of tape's sticky nature: adhesion and cohesion. Adhesion is the binding force between two different materials, whereas cohesion is the binding force between two similar materials. When two materials are brought into contact with each other, the surface molecules interact, giving rise to attractive forces that may be physical, chemical or electrostatic (corresponding to adsorption, covalent bonding or van der Waals forces, respectively). When the molecules are similar, as in the case of two 'glue molecules,' the cohesive force causes the glue to stick to itself. When the molecules are dissimilar, as in the case of a glue molecule and a molecule of the substrate (the surface the glue is sticking to), the adhesive force holds the glue to the substrate. Hence, the 'stickiness' of tape is caused by a combination of the molecular forces of the glue material sticking to itself as well as holding onto the substrate.
Recipes do not always scale linearly: when the surface-to-volume ratio matters, quantities and timings will not simply double when the batch doubles. See http://kitchenscience.sci-toys.com/scaling for a discussion, mostly of how timings are affected. One example is breading: a single 400g piece of beef/chicken does not need twice the breading of a single 200g piece, since the surface area does not grow by the same factor as the weight. On the other hand, if you use twice as many pieces of chicken, you will need twice as much breading.
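A rough back-of-envelope of the breading example, assuming the larger piece keeps roughly the same shape (so surface area grows with mass^(2/3) rather than with mass); the numbers are illustrative only:

```python
def breading_factor(mass_ratio: float) -> float:
    """How much more breading a single, geometrically similar piece needs
    when its mass is scaled by mass_ratio: area scales as mass ** (2/3)."""
    return mass_ratio ** (2 / 3)

# One 400 g piece vs one 200 g piece: ~1.59x the breading, not 2x.
print(breading_factor(400 / 200))

# Two 200 g pieces vs one: surface area, and hence breading, does double.
print(2 * breading_factor(1.0))
```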
"Software structure" exists to benefit human developers. An AGI, at least within the current paradigms of AI, would be self-evolved through enormous amounts of unsupervised learning. The resulting software artefact would be "written" by and for a machine, so to begin with not really "written" in a programming language, as a such brings no benefit to the machine. So it would look like a very large obfuscated compiled binary, but it would be even harder for humans to decompile and reverse-engineer than such binaries usually are (which is very), because it hasn't been written by a human whose intentions could be hypothesised in order to work out the structure.
Probably a giant spreadsheet of weights encoding a giant neural net. What this net would run on is much less important, as long as self-modification is supported.
Instead, the return trip effect is likely due to a violation of expectations. Participants felt that the initial trip took longer than they had expected. In response, they likely lengthened their expectations for the return trip. In comparison with this longer expected duration, the return trip felt short. The more the participants’ expectations were violated on the initial trip, the more they experienced the return trip effect.
I asked this question on r/TheMotte's Small-Scale Question Sunday for the week of August 16, 2020:
How do I get better at determining if a paper's methods and results are reliable and robust? Often when reading a paper (mostly social sciences), I find myself agreeing with their methods and findings, only to find someone who is seemingly qualified rip the methods and results to shreds.
My probability and statistics backgrounds are average and rudimentary, respectively, which I think has something to do with it. Would learning more about statistics/experimental methods be the best course of action? If so, any suggestions on resources (textbooks, lectures, etc.)? DeGroot and Schervish's Probability and Statistics seems reputable, but I'm open to other recommendations.
The answers I got were (each blockquote is an individual user's answer):
- Look for confounding effects and whether they were controlled for. All good papers should mention them -- if they don't, it's an indication of agenda. For example, hours worked for the 'earnings gap', power in 'interrupting', Socio-economic-status indications in name studies.
- You can pretty much dismiss studies that use self-selection (an internet survey) or self-report (e.g. dating preferences).
- p levels near 0.05 and low or unreported effect sizes are suspicious (but this seems a rarer form of deception).
your question reminds me of a Paul Graham quote, which i think is very much applicable to academic papers - and my opinion is that it's more likely to be useful than digging into probability and statistics (although, that's not a waste of time): "Remember the exercises in critical reading you did in school, where you had to look at a piece of writing and step back and ask whether the author was telling the whole truth? If you really want to be a critical reader, it turns out you have to step back one step further, and ask not just whether the author is telling the truth, but why he's writing about this subject at all." -- Paul Graham
Some approaches I take:
- Think about how the results could have been achieved by something other than the proposed mechanism. Look for what is unobserved, much of which will be due to unobservables. For example, if cancer rates are higher in (say) soda drinkers, one could easily craft an explanation that there's some general propensity for unhealthy behavior that causes both soda drinking and other unhealthy eating. Or that poverty correlates with soda consumption and exposure to lead paint.
- Related to (1): think about causal diagrams. LessWrong has a long description here. The most simplified case: if A and B are correlated, then A could cause B, B could cause A, or some unobserved C could cause A and B.
- Look not just at statistical significance, but practical significance. You can have significant p-values with effect sizes nobody would care about.
- For studies demonstrating negatives, think about sample and effect sizes. It's very hard to demonstrate that something causes a low-probability event (like death or cancer) without huge sample size, so finding "no evidence for" low-probability events is easy.
- Critically evaluate the incentives of the researcher and how they might come through in the published findings. Even with a large-sample-size randomized controlled trial, researchers have broad latitude over which effects to evaluate, whether outliers get trimmed, which subgroups to evaluate, and whether they publish their studies at all. If you see that data collection stopped early, that a drug impacts skin cancer rates when it could've just as easily been tested on 20 other cancers, that an effect is found in men 18-24 but in no other subgroup, etc., this is evidence for p-hacking.
- Evaluate selection bias. Is there something fundamental in how people were chosen for a study that makes it non-random to begin with? Would they attrit from a study in a non-random way?
- Look for confirming or contradictory research. Even if there isn't anything studying the exact topic, you can do this in a Bayesian fashion - there may be something that you'd have some level of confidence in generalizing to the topic at hand. For the soda example, if soda is found to be carcinogenic in only one human study and you can find a few other studies showing that the individual ingredients (artificial colors, phosphoric acid) are carcinogenic in mice, this is some level of evidence for their claim.
Honestly? You can't. Not without becoming an expert in the field. From the outside, you could spot egregiously bad statistical errors or total failures of control, but those will be the minority of cases. What look like unfounded assumptions may simply be textbook knowledge so universal in the field and so well replicated that nobody even bothers citing it anymore. Conversely, what seems utterly unremarkable may be a fatal flaw.
I recall one paper I peer reviewed. I loved this paper, right up my alley in an area I was interested in, done by people I know and respect. Everything was great...except they used one wrong drug which tainted all of the results. The number of people who are in a situation to use this drug *and* study the topic it damages is tiny, maybe 20 of us in the world. But that was it, the study was dead. To the authors' credit, they didn't re-submit and I've never seen the paper anywhere else, so I think they just tossed it. And they were new to the field, so they simply didn't know.
There are a lot of downsides to peer review, but the biggest upside, and it's a HUGE one, is that these people have deep knowledge of the topic and have often used these same experimental techniques themselves. They know the quirks, the weird confounders, that one study from an obscure journal 20 years ago that proves you can't study X with this technique even if it works for all the other letters, but also the difficulties and realistic limitations (e.g. working with dangerous chemicals or endangered species).
Learning more about stats wouldn't hurt, but another good approach is reading a lot of teardowns of studies. Go through Gelman's blog archives, for example. There are classes of methodological errors that keep coming up over and over again, and even if you can't identify them on your own the first time you see each one, you can learn to recognize them once you've seen an example or two.
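A few of the points in these answers (confounding, practical vs statistical significance, power against rare outcomes) are easy to see in a quick simulation. This is only an illustrative sketch with made-up numbers, not an analysis of any real study; it assumes numpy and scipy are available:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200_000

# Confounding: an unobserved C drives both A and B, so A and B correlate
# even though neither causes the other (the soda/poverty example above).
c = rng.normal(size=n)             # e.g. a general propensity for unhealthy behavior
a = 0.5 * c + rng.normal(size=n)   # soda consumption
b = 0.5 * c + rng.normal(size=n)   # health outcome
print(np.corrcoef(a, b)[0, 1])     # ~0.2, despite no causal link between A and B

# Statistical vs practical significance: a trivial true difference (0.02 SD)
# still produces a tiny p-value once the sample is large enough.
x = rng.normal(0.00, 1.0, n)
y = rng.normal(0.02, 1.0, n)
print(stats.ttest_ind(x, y).pvalue)  # "significant", but the effect is negligible

# Rare outcomes: at a 0.1% event rate, 1,000 people per arm gives almost
# no power to detect even a doubling of risk.
detections = 0
for _ in range(2_000):
    control = rng.binomial(1000, 0.001)
    treated = rng.binomial(1000, 0.002)
    _, p = stats.fisher_exact([[treated, 1000 - treated],
                               [control, 1000 - control]])
    detections += p < 0.05
print(detections / 2_000)            # typically well under 20%
```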
Social blunders:
From Micro Habits that Improve One’s Day:
One of the main ways I managed to instill good habits in myself is to both use optimal paths to good habits and close off the optimal paths to sub-optimal habits. The trick is to make a good habit easier than it is annoying, and a bad habit more annoying than it is preferable. Examples:
- Hydration: I simply place a 2 L water bottle by the apartment door every evening. It becomes impossible for me to leave the house without picking it up, and once it is in my hand, I'm so much more likely to drink from it and take it with me than to forget it.
- Exercise: I bought dumbbells to work out with, but consciously made no place to put them. I just place them on my gaming chair, so it becomes impossible to use the PC without lifting the dumbbells. But the moment they are literally in my hands, it is easier to just pump a few curls than not.
- Exercise/commute: I'm trying to unlearn driving everywhere and bike whenever I can. I just place my car keys in my bike's frame pouch. This way I cannot leave the house without touching my bike, and once I do, it's easier to just hop on it and ride away.
- Diet: I always struggled with weight, and the one "simple trick" that actually worked for me was brushing my teeth ASAP after dinner. Since my teeth are already brushed, and it would be annoying to do so again, I'm much less likely to snack after dinner. If the urge to snack is really strong, I just use some mouthwash, which not only makes me even more disinclined to soil my super-clean teeth, but also means no snacks taste good when my mouth is super minty/mentholly.
- Waking early: the path to the sub-optimal habit is to hit snooze on the alarm and go back to sleep. Breaking the habit was as easy as placing the alarm clock in the bathroom, so I would have to walk across the entire house to turn it off, and once I do, I'm already where I need to be to brush my teeth and shave, so I might as well do so.
The reason these work is that all those habits are relatively weak, and a small tweak to how annoying they are makes all the difference. It's basically weaponizing my own laziness/procrastination against itself. The goal is to make myself spend extra energy walking around and looking for the things needed for my bad habits, while the things needed for the good habits are always in my path.
Comments on "80% of bosses say they regret earlier return-to-office plans":
I have a new conspiracy theory on RTO in Big Tech. Much has been said about commercial real estate. But I think getting highly capable people to work hard for you is a difficult problem & something tech executives put a lot of thought into. If you don't pay people enough, someone else will. But if you pay them too much, they'll become financially independent. Mid six figures is an objectively life-changing amount of money that an engineer would be insane to walk away from & that few companies can match. But because it's contingent on living in places where the "working rich" are on the lower half of the housing market totem pole, housing will always place significant pressure even on an obscene income, and your home will never be quite satisfactory. It keeps you on the treadmill working harder and longer, worrying about the stock price, chasing the next promotion or refresher so that you can afford a little more light, a little more space, a little less commute time. And that's right where they want you. Location independence means people could actually get their fill of "TC" which would make it difficult to keep them sweating over such uninspiring projects.
From Paul Graham's How to Do Great Work:
The work you choose needs to have three qualities: it has to be something you have a natural aptitude for, that you have a deep interest in, and that offers scope to do great work.
There's a kind of excited curiosity that's both the engine and the rudder of great work. It will not only drive you, but if you let it have its way, will also show you what to work on. ... What are you excessively curious about — curious to a degree that would bore most other people? That's what you're looking for.
Four steps: choose a field, learn enough to get to the frontier, notice gaps, explore promising ones. This is how practically everyone who's done great work has done it, from painters to physicists. ... Steps two and four will require hard work. It may not be possible to prove that you have to work hard to do great things, but the empirical evidence is on the scale of the evidence for mortality. That's why it's essential to work on something you're deeply interested in. Interest will drive you to work harder than mere diligence ever could. ... The three most powerful motives are curiosity, delight, and the desire to do something impressive. Sometimes they converge, and that combination is the most powerful of all.
From Gwern's transcript of Hamming's You and Your Research:
“There are wavelengths that people cannot see, there are sounds that people cannot hear, and maybe computers have thoughts that people cannot think.”
One of the characteristics you see, and many people have it including great scientists, is that usually when they were young they had independent thoughts and had the courage to pursue them.
One of the characteristics of successful scientists is having courage. Once you get your courage up and believe that you can do important problems, then you can. If you think you can’t, almost surely you are not going to.
When you are famous it is hard to work on small problems. This is what did Shannon in. After information theory, what do you do for an encore? ... The great scientists often make this error. They fail to continue to plant the little acorns from which the mighty oak trees grow. They try to get the big thing right off. And that isn’t the way things go. So that is another reason why you find that when you get early recognition it seems to sterilize you.
“Knowledge and productivity are like compound interest.” Given two people of approximately the same ability and one person who works 10% more than the other, the latter will more than twice outproduce the former. The more you know, the more you learn; the more you learn, the more you can do; the more you can do, the more the opportunity—it is very much like compound interest. I don’t want to give you a rate, but it is a very high rate. Given two people with exactly the same ability, the one person who manages day in and day out to get in one more hour of thinking will be tremendously more productive over a lifetime.
the idea is that solid work, steadily applied, gets you surprisingly far. The steady application of effort with a little bit more work, intelligently applied is what does it.
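One toy way to read the compound-interest claim, assuming productivity compounds at some annual rate proportional to effort; the rates, the 10% extra effort, and the 40-year career below are invented purely for illustration:

```python
def lifetime_ratio(base_rate: float, extra_effort: float = 0.10, years: int = 40) -> float:
    """Cumulative advantage of someone whose growth rate is 10% higher,
    under a toy model where output compounds at base_rate per year."""
    return ((1 + base_rate * (1 + extra_effort)) / (1 + base_rate)) ** years

for r in (0.10, 0.20, 0.30):
    print(f"rate {r:.0%}: advantage after 40 years ≈ {lifetime_ratio(r):.2f}x")
# Roughly 1.4x, 1.9x, 2.5x: the "more than twice" gap only appears when the
# underlying rate is high, which fits Hamming's "I don't want to give you a
# rate, but it is a very high rate."
```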
When you find apparent flaws you’ve got to be sensitive and keep track of those things, and keep an eye out for how they can be explained or how the theory can be changed to fit them. Those are often the great contributions. ... It comes down to an emotional commitment. Most great scientists are completely committed to their problem. Those who don’t become committed seldom produce outstanding, first-class work.
You can’t always know exactly where to be, but you can keep active in places where something might happen. And even if you believe that great science is a matter of luck, you can stand on a mountain top where lightning strikes; you don’t have to hide in the valley where you’re safe. But the average scientist does routine safe work almost all the time and so he (or she) doesn’t produce much. ... It’s that simple. If you want to do great work, you clearly must work on important problems, and you should have an idea.
The great scientists, when an opportunity opens up, get after it and they pursue it. They drop all other things. They get rid of other things and they get after an idea because they had already thought the thing through. Their minds are prepared; they see the opportunity and they go after it. Now of course lots of times it doesn’t work out, but you don’t have to hit many of them to do some great science.
I notice that if you have the door to your office closed, you get more work done today and tomorrow, and you are more productive than most. But 10 years later somehow you don’t quite know what problems are worth working on; all the hard work you do is sort of tangential in importance. He who works with the door open gets all kinds of interruptions, but he also occasionally gets clues as to what the world is and what might be important.
You should do your job in such a fashion that others can build on top of it, so they will indeed say, “Yes, I’ve stood on so-and-so’s shoulders and I saw further.” The essence of science is cumulative. By changing a problem slightly you can often do great work rather than merely good work. Instead of attacking isolated problems, I made the resolution that I would never again solve an isolated problem except as characteristic of a class.
But the fact is everyone is busy with their own work. You must present it so well that they will set aside what they are doing, look at what you’ve done, read it, and come back and say, “Yes, that was good.” ... You have to learn to write clearly and well so that people will read it, you must learn to give reasonably formal talks, and you also must learn to give informal talks.
The people who do great work with less ability but who are committed to it, get more done than those who have great skill and dabble in it, who work during the day and go home and do other things and come back and work the next day. They don’t have the deep commitment that is apparently necessary for really first-class work. They turn out lots of good work, but we were talking, remember, about first-class work. There is a difference. Good people, very talented people, almost always turn out good work. We’re talking about the outstanding work, the type of work that gets the Nobel Prize and gets recognition.
He had his personality defect of wanting total control and was not willing to recognize that you need the support of the system.
good scientists will fight the system rather than learn to work with the system and take advantage of all the system has to offer. It has a lot, if you learn how to use it. It takes patience, but you can learn how to use the system pretty well, and you can learn how to get around it. After all, if you want a decision ‘No’, you just go to your boss and get a ‘No’ easy. If you want to do something, don’t ask, do it. Present him with an accomplished fact. Don’t give him a chance to tell you ‘No’. But if you want a ‘No’, it’s easy to get a ‘No’.
It was a payoff for the times I had made an effort to cheer her up, tell her jokes and be friendly; it was that little extra work that later paid off for me. By realizing you have to use the system and studying how to get the system to do your work, you learn how to adapt the system to your desires. Or you can fight it steadily, as a small undeclared war, for the whole of your life.
I am not saying you shouldn’t make gestures of reform. I am saying that my study of able people is that they don’t get themselves committed to that kind of warfare. They play it a little bit and drop it and get on with their work. ... Many a second-rate fellow gets caught up in some little twitting of the system, and carries it through to warfare. He expends his energy in a foolish project. Now you are going to tell me that somebody has to change the system. I agree; somebody has to. Which do you want to be? The person who changes the system or the person who does first-class science? Which person is it that you want to be? Be clear, when you fight the system and struggle with it, what you are doing, how far to go out of amusement, and how much to waste your effort fighting the system. My advice is to let somebody else do it and you get on with becoming a first-class scientist. Very few of you have the ability to both reform the system and become a first-class scientist.
If you really want to be a first-class scientist you need to know yourself, your weaknesses, your strengths, and your bad faults, like my egotism. How can you convert a fault to an asset? How can you convert a situation where you haven’t got enough manpower to move into a direction when that’s exactly what you need to do?
If you want to think new thoughts that are different, then do what a lot of creative people do—get the problem reasonably clear and then refuse to look at any answers until you’ve thought the problem through carefully how you would do it, how you could slightly change the problem to be the correct one.
Electric car battery breakthroughs and being a bit skeptical given all the parameters a battery must do well on (source):
It seems there's news of a battery breakthrough every week. I've learned to temper expectations, because so many "breakthroughs" turn out to be dead ends. Because it's not enough for a battery to be incredibly light, or made of abundant materials, or last for ten thousand cycles. It needs to be good at many things and at least okay at most things. E.g.:
- How much capacity per dollar?
- How much capacity per kilogram?
- How much capacity per litre?
- How quickly can it be charged?
- How quickly can it be discharged?
- How much energy is lost between charging and discharging?
- How predisposed is it to catching fire?
- How available are the materials needed to manufacture it?
- How available are the tools/skills required to manufacture it?
- How resilient is it to mechanical stress, e.g. vibration?
- How much does performance degrade per cycle?
- How much does performance degrade when stored at a high state of charge?
- How much does performance degrade when stored at a low state of charge?
- How much does performance drop at high temperatures?
- How much does performance drop at low temperatures?
- How well can it be recycled at end-of-life?
A sufficiently bad answer for any one of these could utterly exclude it from contention as an EV battery. A battery which scores well on everything except mechanical resilience is a non-starter, for example. Though it might be great for stationary storage. I'm only a layperson and this list is what I came up with in just a few minutes of layperson thought. I'm sure someone with more familiarity with battery technology could double the length of this list. But the point is, when you daydream about some hypothetical future battery tech, you need to appreciate just how well today's lithium chemistries score in so many areas.
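A small sketch of the "one sufficiently bad answer excludes it" logic above: candidates are screened by their worst criterion, not by their average. The criteria names, scores, and threshold are hypothetical placeholders, not real battery data:

```python
def viable_for_ev(scores, floor=3.0):
    """A candidate passes only if it clears a minimum bar on *every*
    criterion -- one bad score vetoes it, however good the rest are."""
    return all(score >= floor for score in scores.values())

candidate = {
    "capacity_per_dollar": 9, "capacity_per_kg": 8, "capacity_per_litre": 8,
    "charge_rate": 7, "cycle_life": 9, "fire_safety": 6,
    "material_availability": 8, "mechanical_resilience": 1,  # the one weak spot
}
print(viable_for_ev(candidate))                   # False: vetoed by resilience
print(sum(candidate.values()) / len(candidate))   # ~7/10 on average, yet a non-starter
```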
From Gwern's notes on Range of Human Capacities:
Wechsler found that the ratio of the smallest to biggest, or best to worst, in just about any measure of humanity, from high jumping to hosiery looping [knitting], was between 2 to one and 3 to one. To Wechsler, the ratio appeared so consistent that he suggested it as a kind of universal rule of thumb.
[Ackerman] combed the world’s skill-acquisition studies in an effort to determine whether practice makes equal, and his conclusion is that it depends on the task. In simple tasks, practice brings people closer together, but in complex ones, it often pulls them apart. Ackerman has designed computer simulations used to test air traffic controllers, and he says that people converge on a similar skill level with practice on the easy tasks—like clicking buttons to get planes to take off in order—but for the more complex simulations that are used for real-life controllers, “the individual differences go up”, he says, not down, with practice. In other words, there’s a Matthew effect on skill acquisition.