== Sick ==
{{hat|close request for medical advice}}
I'm sixteen and have only been sick 4 times in the last 10 years, each illness lasting only a few days, with intervals of several years in between. The rest of my family gets sick as often as usual. Does anyone have an idea why? -- [[80.161.143.239]] 20:52, 30 January 2013 (UTC)
:Genetic variation causes individuals, even in a family, to have different immunity to each disease. Also, general health will affect how susceptible each individual is. Those with poor diets, little exercise, and with other medical conditions, will be more susceptible to disease than healthy people. Finally, while you might assume that you and the other members of your family are all exposed to the same diseases, this isn't necessarily true. Some diseases are only spread by types of contact that can be avoided. Young children tend to be far sloppier with hygiene, and thus can get diseases that adults usually don't, like intestinal worms. [[User:StuRat|StuRat]] ([[User talk:StuRat|talk]]) 22:55, 30 January 2013 (UTC)
Please seek medical attention. The OP should do the same. [[User:Medeis|μηδείς]] ([[User talk:Medeis|talk]]) 03:57, 31 January 2013 (UTC)
{{hab}}
== 13.7 billion years ==
Revision as of 03:57, 31 January 2013, of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
January 26
Opel truck
What's the maximum number of people that can squeeze into an Opel Blitz truck? Thanks in advance! 24.23.196.85 (talk) 01:38, 26 January 2013 (UTC)
- The German article (de:Opel_Blitz) has technical specifications, including maximum weight, and sizes, for various truck models. Those should help you set some reasonable bounds. Nimur (talk) 01:45, 26 January 2013 (UTC)
- It has no specs for any of the WW2 models -- and those are the ones I want to know about. 24.23.196.85 (talk) 02:03, 26 January 2013 (UTC)
- If you're wondering about the ability to move soldiers in WWII, the answer is "how desperate are you?". If you're not worried about things like "safety" or "keeping the truck working next week", you can fit a rather incredible number of people on one truck. On the other hand, for routine operations, a good rule of thumb is that one truck can carry one squad of soldiers with full equipment. --Carnildo (talk) 02:17, 26 January 2013 (UTC)
- Thanks! And would 12 soldiers be a squad? The article says it would be. 24.23.196.85 (talk) 03:42, 26 January 2013 (UTC)
- Its supply-carrying capacity is more important than its infantry-carrying capacity: 3 tons. There is space for 10 seated facing each other in the back, from having seen one. --89.101.197.30 (talk) 14:25, 26 January 2013 (UTC)
- Echoing 89's comment, supply capacity really is a far more interesting number than troop capacity. The WW2 German Army was, on the whole, very poorly mechanized. Our Horses in WW2 article notes that a standard German infantry division had about 250 trucks (and 2,500 horses) for about 20,000 troops. While you can put troops in those trucks, you obviously can't put anywhere near all of them in, and that leaves your division moving at foot speed -- de-emphasizing the troop-carrying capabilities of the truck. The Panzergrenadier divisions were well-mechanized (and far more closely resembled the majority of British and American infantry units in that regard), but there weren't many of them. — Lomn 15:54, 26 January 2013 (UTC)
- For me personally, troop-carrying capacity is more interesting -- my actual question was whether a total of 12 French Resistance fighters could fit into an Opel Blitz, and whether there would be any room left for a few concentration camp escapees to share space with the dozen Maquis on board. 24.23.196.85 (talk) 04:55, 27 January 2013 (UTC)
- I'm sure that in an extreme need, you could fit that many people into the truck. It wouldn't be comfortable - but it's clearly possible. The thing about vehicles of that era is that they typically had very low horsepower compared to modern vehicles. That German article says that the 1945 version produced 73.5 hp and weighed 5,500lb. Compare that to a comparably heavy, modern Ford Explorer (280hp, 5,000lb!) - and you can see that despite weighing a little more, the 1945 Opel has less than a quarter of the power-to-weight ratio of a modern 'people carrier'. Put 15 guys in there (let's say 150lb each - maybe 200lb with weapons & packs...3,000lb total) and the truck now has maybe a sixth of the power-to-weight of a modern truck. The German wikipedia article says it has a maximum payload of 7,200lb - so the frame and suspension could easily manage even 30 people. Of course your Opel must be a yet older model - so I'd expect similar weight but even less horsepower...but it would be nice to know those two figures to be sure.
- But with such an awful power-to-weight ratio, instead of 0-60mph in 12 seconds(ish) in an Explorer - your acceleration in a heavily loaded 1945 Opel is going to be 6 times worse! In reality, it wouldn't ever reach 60mph - but if it could, it would take a minute and a quarter with foot mashed to floor to get there! So what would happen would be that the thing would have very poor performance - you'd probably have to drive it in a lower gear, keep your speed around 30mph and go up hills at walking pace in 1st gear - and you might have to have the guys get out and push on a really steep road...but that's how trucks were driven back then. But in low gear, I'm sure it could manage. If the people would physically fit in there - I'm sure it would carry them OK. In times of great pressure, I'm sure our brave Maquis would be happy to hang on the outside - lay on the roof, etc. SteveBaker (talk) 16:21, 27 January 2013 (UTC)
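Steve's power-to-weight arithmetic above can be sketched in a few lines of Python. This is only a back-of-envelope check reusing the figures quoted in the thread (73.5 hp / 5,500 lb for the 1945 Opel, 280 hp / 5,000 lb for the Explorer, 15 passengers at roughly 200 lb each with gear); those numbers are the posters' estimates, not verified specifications.

```python
# Rough power-to-weight comparison using the thread's figures.

def hp_per_ton(hp, weight_lb):
    """Horsepower per US ton (2,000 lb) of vehicle weight."""
    return hp / (weight_lb / 2000.0)

explorer   = hp_per_ton(280.0, 5000)   # ~112 hp/ton
opel_empty = hp_per_ton(73.5, 5500)    # ~26.7 hp/ton

# Load the Opel with 15 men at ~200 lb each (weapons and packs included).
opel_loaded = hp_per_ton(73.5, 5500 + 15 * 200)

print(f"Explorer:      {explorer:.0f} hp/ton")
print(f"Opel (empty):  {opel_empty:.0f} hp/ton")
print(f"Opel (loaded): {opel_loaded:.0f} hp/ton")
print(f"Ratio, loaded: {explorer / opel_loaded:.1f}x worse")
```

The loaded ratio comes out around 6.5x, matching Steve's "maybe a sixth of the power-to-weight of a modern truck".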
- You can seat two guys in the cab, ten in rear seats, and anyone else has to squat on the floor between the seats in the back. Of course, at slow speed, you could cram 20+ guys in the back, three or four in the cab and 10+ hanging on to the outside. There are some funny World War II pictures of this type of thing around.--89.101.197.30 (talk) 20:54, 27 January 2013 (UTC)
- So, 20-25 people would fit in just fine, at the expense of greatly reduced speed and acceleration (which is fine by me, and can actually help to build dramatic tension) -- is that right? 24.23.196.85 (talk) 00:41, 28 January 2013 (UTC)
- If the 1945 truck specifications aren't too different from the pre-1945 version, then yes. To be clear about the performance thing: On a smooth, level road, top speed is determined overwhelmingly by atmospheric drag...but acceleration is all about mass - so in the best conditions, the overloaded truck might be able to get up to a reasonable speed eventually. However, on a rough road or when going uphill, the mass is again important. Another thought is that those old vehicles had really bad brakes - and deceleration (just like acceleration) is determined mostly by the amount of weight it's hauling around. So if you ever did get it going fast, it would be really poor at braking. If you make the thing too top-heavy, there might also be a roll-over risk when cornering hard...the ability to corner fast also depends greatly on the weight of the thing. SteveBaker (talk) 13:08, 28 January 2013 (UTC)
- Thanks, I'll keep all this in mind. FYI, the scene I have in mind takes place during an escape from Natzweiler-Struthof, and I'm pretty familiar with the terrain there -- the ground rises quite a bit when going away from the camp (with a corresponding fall on the other side of the hill), and the road is quite winding, though the terrain does level off farther out. So all these factors can make the difference between getting away clean or crashing and getting recaptured -- perfect for my purposes (bwahahahaha!) 24.23.196.85 (talk) 06:00, 29 January 2013 (UTC)
Nutrition facts of cooked chicken
The Wikipedia article on Chicken (food) and Nutritiondata have different data. Our article says 100 g of cooked chicken contains 26 g of protein; Nutritiondata says the figure is 31 g. Where can I get reliable and accurate nutrition information? --PlanetEditor (talk) 04:48, 26 January 2013 (UTC)
- Those are close enough that both could be in the normal range. The breed of chicken, how the chicken is raised and cooked, whether it's white meat or dark, whether the skin is removed, etc., could all account for the difference. StuRat (talk) 05:03, 26 January 2013 (UTC)
- What StuRat said. Checking the website that is the source for the data in the Wikipedia article has hundreds of different variations on cooked chicken, and it isn't readily apparent which specific entry the Wikipedia data is drawn from, but given the variation likely, it doesn't seem impossible that both sets of data are correct, for any given value of "chicken". --Jayron32 05:07, 26 January 2013 (UTC)
- (edit conflict)Not to mention that two chickens will not necessarily be identical, so the amount of protein in them would be different (for example, if you were more muscular than I, you'd likely have more protein). Those numbers are from different sources, which likely took the mean of the protein content of many chickens, so if different chickens were used to calculate the numbers, you'd get different values. Brambleclawx 05:09, 26 January 2013 (UTC)
- The nutritiondata page specifies "Chicken, broilers or fryers, breast, meat only, cooked, roasted"; the USDA page which is the source for the Wikipedia table says 28.93 g for "Chicken, broilers or fryers, meat only, roasted"; that reduces the difference to 7% (I don't know which particular chicken entry of the multiple varieties given in the USDA page was the source for the 26). Considering that the nutritiondata number is only precise to 0 decimal places, i.e. 3%... as we used to say in biology lab, "within 10% means it's the same". Gzuckier (talk) 07:32, 26 January 2013 (UTC)
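Gzuckier's percentage arithmetic can be reproduced in a couple of lines (a sketch; the gram figures are the ones quoted in this thread, and the helper name is illustrative):

```python
# Relative difference between the protein figures discussed above.

def rel_diff(a, b):
    """Relative difference of a vs b, as a percentage of b."""
    return abs(a - b) / b * 100.0

print(f"{rel_diff(31.0, 28.93):.1f}%")  # NutritionData vs USDA 'meat only, roasted'
print(f"{rel_diff(31.0, 26.0):.0f}%")   # NutritionData vs the Wikipedia figure
```

The first comes out just over 7%, the second around 19%, which is why matching the exact database entry matters more than the measurement precision.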
- Here's an entry from NutritionData.com which seems to match our Wikipedia article: [1]. However, our article giving the protein value down to .01 gram seems a bit silly. Perhaps technical limitations prevent rounding different nutrients differently in the same table, or they just thought everything should be rounded the same as a style issue. StuRat (talk) 05:17, 26 January 2013 (UTC)
- Thanks everyone. Another question: which part of the chicken contains the maximum amount of protein? --PlanetEditor (talk) 05:31, 26 January 2013 (UTC)
- Looking through the NutritionData.com charts, it looks like the breast. I'd guess that's because other parts, like the back, wings and thighs, contain a larger percentage, by weight, of bone. They probably should compare the various pieces without the bones, to remove this bias. StuRat (talk) 05:54, 26 January 2013 (UTC)
- It's not just the bone: chicken wings, thighs, and legs all contain a significantly higher proportion of fat (in the edible parts) than chicken breast. Source - those nutrition labels they force them to stick on the packages in the UK. Whether the difference is made up mostly by protein or not, I wouldn't say for certain, but I would guess it's likely. --Demiurge1000 (talk) 06:22, 26 January 2013 (UTC)
- It says 64 grams of cooked chicken is water. Slight differences in meat dryness probably change protein per 100 g a lot. Also, don't go overboard with the protein eating; it is possible to eat too much. Sagittarian Milky Way (talk) 03:53, 28 January 2013 (UTC)
Rain = snow depth
What would be the equivalent snow depth of 2 inches of rain? In other words, if 2 inches of rain fell as snow, how deep would the snow be? --TammyMoet (talk) 17:31, 26 January 2013 (UTC)
- It depends on how wet/dry the snow is. 10 inches of snow to 1 inch of water is the common rough estimate. — Lomn 17:41, 26 January 2013 (UTC)
- (ec) As noted in snow gauge, the 'rule of thumb' ratio often used in comparing rainfall to snowfall is 1:10—that is, one inch of rain is roughly the same amount of precipitation as ten inches of snow. (In reality, however, a whole bunch of ambient factors affect the density of snow. 3 to 5 inches of really dense, 'wet' snow may melt down to an inch of water, whereas it might take 30 or more inches of really light, fluffy snow.) TenOfAllTrades(talk) 17:43, 26 January 2013 (UTC)
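The rule of thumb above is easy to put in code; the 10:1 ratio and the roughly 3:1 to 30:1 range are the approximate figures from this thread, not meteorological constants:

```python
# Rule-of-thumb snow depth from liquid-equivalent precipitation.

def snow_depth(rain_in, ratio=10.0):
    """Estimated snow depth (inches) for a given liquid-equivalent rainfall."""
    return rain_in * ratio

print(snow_depth(2.0))            # the OP's 2 inches of rain, standard 10:1
print(snow_depth(2.0, ratio=3))   # dense, wet snow
print(snow_depth(2.0, ratio=30))  # light, fluffy powder
```

So the OP's 2 inches of rain corresponds to anywhere from about 6 to 60 inches of snow, with 20 inches as the standard rough estimate.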
- Also note that snow compresses after it falls. So, it will become steadily more dense, the longer it sits on the ground, eventually becoming ice, with almost the same density as water. This is especially true of snow which is compressed by the weight of snow above it, or where people or cars pass above it. StuRat (talk) 17:53, 26 January 2013 (UTC)
- Cold snow is drier, due to absolute humidity. Sagittarian Milky Way (talk) 18:14, 26 January 2013 (UTC)
Thank you all! (Currently breathing a sigh of relief that the rain we had last night wasn't snow.) --TammyMoet (talk) 20:08, 26 January 2013 (UTC)
- Note that you generally get far less moisture falling as snow than as rain, since you only get snow when it's cold, and cold air holds much less moisture to begin with. However, the snow might seem like more, both because it takes up more room, and because it sticks around, versus rain which usually goes down the nearest drain, soaks into the ground, or evaporates, in short order. StuRat (talk) 20:31, 26 January 2013 (UTC)
- See Snow#Density which has some references. And no, snow, of a single season, does not become ice. Snow that does not melt in the summer may eventually turn into glaciers. If the snow has become ice in a single season then it is due to partial melting and later refreezing. CambridgeBayWeather (talk) 19:14, 27 January 2013 (UTC)
- In fact, during the Battle of the Bulge, it only took one day of tanks driving over snow-covered roads to turn the snow into ice -- despite constant plowing, salting, etc. 24.23.196.85 (talk) 07:12, 28 January 2013 (UTC)
- In my immediate experience ten days ago (on the South coast of England), a day of lighter-than-usual pedestrian traffic on town pavements ('sidewalks') can be sufficient to turn 3 or so inches of snow into uneven ice. I walked most of 5 miles into work (buses and other vehicles proving unable to cope on inclines) in the near-virgin snow quite easily, but walking a mile to the railway station that evening was far more difficult. {The poster formerly known as 87.81.230.195} 84.21.143.150 (talk) 14:45, 28 January 2013 (UTC)
- I'm talking about paved roads in Yellowknife after months of vehicles passing over them. CambridgeBayWeather (talk) 14:47, 28 January 2013 (UTC)
- Perhaps the low temperatures in Yellowknife prevent the snow from melting and refreezing as tires compress it, and thus delay ice formation there. StuRat (talk) 07:54, 29 January 2013 (UTC)
Formal term for re-mapping of a set of values to other values.
Hi,
Please help me find the above technical term.
Quantisation is the process of remapping values to a smaller set of values (many-to-few), but what is the term for when the number of values in the set remains the same, but the data points change value? (Edited - sorry, the following is an example of the sought-after term, not of quantisation. The example is:)
The data points at the start are: 1, 3, 5, 7, 9. Some of the data points' values are changed by some algorithm or process, so that they align with some other value set, and the resultant values are 1.5, 3, 5, 6, 8 respectively.
This is not quantisation, as the number of data points has not been reduced. Only the values of each data point have been realigned to a new set of values.
What term could describe this?
Thanks, Dale. — Preceding unsigned comment added by Califauna (talk • contribs) 20:16, 26 January 2013 (UTC)
- Not an answer, but the first case, where you remap data to a smaller set, is known as "hashing", at least in computer science. In the second case, I'd simply call it "remapping". StuRat (talk) 20:27, 26 January 2013 (UTC)
- Yes, although when applied to data points I think the term transformation is more widely used. Looie496 (talk) 01:03, 27 January 2013 (UTC)
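The distinction being discussed can be made concrete with a short Python sketch; the function names here are illustrative, not standard terminology. Quantisation snaps many inputs onto a smaller codebook, while the remapping/transformation the OP describes keeps one output per input.

```python
# Quantisation (many-to-few) vs. remapping/transformation (same count).

def quantise(xs, levels):
    """Many-to-few: snap each value to the nearest allowed level."""
    return [min(levels, key=lambda lv: abs(x - lv)) for x in xs]

def remap(xs, mapping):
    """Same cardinality: change values, keeping one output per data point."""
    return [mapping.get(x, x) for x in xs]

data = [1, 3, 5, 7, 9]
print(quantise(data, [0, 5, 10]))          # fewer distinct values survive
print(remap(data, {1: 1.5, 7: 6, 9: 8}))   # the OP's example: 1.5, 3, 5, 6, 8
```

Note that `remap` reproduces exactly the OP's example: five points in, five points out, with some values realigned.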
Thanks for the above. All excellent. — Preceding unsigned comment added by Califauna (talk • contribs) 03:58, 27 January 2013 (UTC)
January 27
Does the dog recognize me?
I have a business associate whom, for the past 2 years, I meet with exactly once a month, and not more. About half the times I meet him, he has his dog with him. I always greet the dog, and the dog nuzzles me and licks my hand (the dog does this with everyone it meets). So this is a dog whom I see on a roughly regular basis once every 2 months (on average) for just a brief moment over the past 2 years. I'm just curious: does the dog recognize me from these short 1 to 2 minute encounters, or am I just some random stranger to the dog every time? Put another way, if I happened to run into this dog at random in a park without its owner, would the dog recognize me? I know that the reverse is not true (if I ran into this dog at random in the park, I could not be certain that it was this dog and not another dog of the same breed and coloring). —SeekingAnswers (reply) 01:00, 27 January 2013 (UTC)
- If you asked this about a person who has encountered you twice a month for the past two years (say, a waiter), it would be impossible to say, right? Well, it's no more possible to say for the dog. They are very good at distinguishing people by smell, but unless the dog shows some overt sign of recognition, some behavior that distinguishes you from other people, there can't be a solid answer. Looie496 (talk) 01:09, 27 January 2013 (UTC)
- It would be a pretty poor waiter who would not recognise a regular customer. In a quality restaurant, waiters will check the bookings to refresh their memory, so that when you arrive roughly on time, they can say something like "So nice to see you again, Mr StuRat", while faking their sincerity with a big smile. However, as far as the dog is concerned, he might or he might not - it depends on how different you are, in the dog's perception, to other humans he meets. Some humans will greet a dog, some won't, so just greeting it probably won't be enough. If your business associate treats you differently to other people, the dog might cue on that, but it seems unlikely. It's easier to tell with certain breeds of dog that have clearly different sorts of barks for different purposes. I visit my cousin about every 2 to 3 months or so. She has a Labrador. It barks (once) whenever it detects someone coming to the door. If it is someone in my cousin's immediate family, in other words people who are there nearly every week, the dog announces them with a certain soft bark. If it is a stranger it gives a louder, harsher bark. If the dog knows the person, it gives another sort of bark - when I arrive it gives that sort of bark. My cousin calls it the "family bark". Incidentally I pretty much ignore the dog. Another way you can tell if a dog recognises you as a friend is if he approaches you with ears lowered (sometimes ears and head lowered), and walks 360 degrees around you. The more intelligent breeds do this. Poodles are useless - they just bark and don't use dog greeting etiquette - which is why other breeds of dog despise them. Wickwack 121.215.68.88 (talk) 03:13, 27 January 2013 (UTC)
- It would be strange to the point of pathology if the dog didn't recognize your smell the second time he met you. That's the whole reason for the smell the hand greeting---to get to know your smell--not to check if you have bacon between your fingers. There are too many books on dogs, like Cziksentmihly's and The Dog Whisperer's. You might try Temple Grandin's works. And animal intelligence and dog behavior. But the answer is yes. μηδείς (talk) 03:36, 27 January 2013 (UTC)
- I would bet that the dog recognizes you. I have five dogs, so this throws off my example a bit but hear me out. One of the dogs is more... hostile, I'll use that word for now, to strangers than the rest. So we have a method of introducing him to new people who we want him to trust in his house. We don't get a lot of guests and so when we do, they've normally been away for some time. Long enough to be similar in scope to your example. When those people come over, we don't have to go through the introduction every time. Between the members of my pack, they somehow know who is okay to let in and who isn't, which is handy when we need someone to feed the pack on some day that we might be gone. Dismas|(talk) 04:05, 27 January 2013 (UTC)
- Yes, and by recognize all that may be meant is an association of your smell with the tag friend or foe. It's not like they have to remember names, phone numbers, etc. Olfaction is powerful and primitive. μηδείς (talk) 04:18, 27 January 2013 (UTC)
- Yeah, the recognition of individuals is really a different thing from memorizing the 12 times table. I'd say the best estimate for the average ability and amount of variation of dogs in recognizing people is that it's equal to the ability of humans recognizing individual dogs. I.e., it's probably easier for either to recognize members of their own species, but there are still plenty of cues provided by individuals of a different mammalian species. Gzuckier (talk) 06:11, 27 January 2013 (UTC)
- Humans, unless they are dog trainers or the like, are definitely not as good at recognizing and remembering dogs as dogs are at recognizing humans. For most people, other people's dogs are simply not an important aspect of their lives. For a dog, relatively little is more important than those other dogs and people it has met directly. Humans are distracted by the abstract, the conceptual, the verbal, the alpha-numeric, the future. Dogs are here now in the percept and the concrete existent. Humans have to be told to wake up to smell things. μηδείς (talk) 20:44, 28 January 2013 (UTC)
- Dogs are wolf-descendent pack animals - and their eyesight isn't that great. They mostly rely on their incredible sense of smell to recognise pack members and bond with them. That's why they sniff friends and strangers alike. So I think it's reasonable to assume that the dog recognises you - but there is no easy way to know whether it cares or not. So it could be: "<sniff>...not a pack member...<Meh>"...or "<sniff>...oh yeah...that guy who was nice to me a couple of weeks ago <happy>". In my experience, a dog never forgets a free snack...so if you're prepared (and the owner doesn't mind) with a small doggy treat every time you meet the dog - I'm 100% sure you'll get a wild greeting each time! SteveBaker (talk) 15:52, 27 January 2013 (UTC)
- Our article Gray_wolf#Intelligence says that a trained wolf recognized its master after a three-year absence. (I haven't looked into this further) Wnt (talk) 19:08, 27 January 2013 (UTC)
Interchangeable parts?
I recently got a new HP laptop and was wondering if I ought not use the AC charger from my previous Dell for my new HP. I read the details of the AC chargers from their backs and they both say 90V 90W and everything else seems to be pretty much the same, and the male inserts for both the HP and the old Dell charging appear identical and both fit into the female port on my new HP. As an aside, when I say "old Dell" I don't mean that it's 15 years old, but rather perhaps 5 years old. DRosenbach (Talk | Contribs) 01:08, 27 January 2013 (UTC)
- maybe ;/ - assuming the voltage output (not input!), wattage, polarity of connector, size of connector, and AC or DC are all identical, then possibly. However, some newer models have special circuitry that can identify the charger that has been plugged in, to <s>keep a nice tight grip on the market for replacements</s> prevent damage to the laptop. Output voltage is very unlikely to be 90V - usually it is somewhere between 9 and 19 volts - and this is critical to identify correctly before you plug it in. 90W of power seems plausible. There are so many variables that, having had a number of laptops over the years, I've never seen one that was interchangeable with the charger of another manufacturer yet. If you plug it in and it wrecks your laptop, it won't be covered by the warranty. ---- nonsense ferret 01:18, 27 January 2013 (UTC)
- Yes -- you got me. It was 90W, not 90V. But it does say V85 on one of them. DRosenbach (Talk | Contribs) 04:02, 27 January 2013 (UTC)
- There's probably other ways to invalidate the warranty if you really wanted to, but your method should do exactly that just fine. --Jayron32 01:19, 27 January 2013 (UTC)
- If both input and output voltage are the same and the connectors are the same, they should be interchangeable, and it should be okay swapping them. —SeekingAnswers (reply) 01:24, 27 January 2013 (UTC)
- If the power and voltage output match and the connectors fit, then you should be OK. That will not invalidate the warranty (even if it did, how would they know?). Dauto (talk) 02:12, 27 January 2013 (UTC)
Electronics Engineer here: Please post exactly what each power supply says on it, including a description of any graphics that have a "+" "-", "~" or "- - - - -" as part of the diagram. "Pretty much the same" is not good enough to risk your laptop over. --Guy Macon (talk) 03:30, 27 January 2013 (UTC)
- OP here -- OK, I didn't realize what was necessary to make them compatible. The new HP AC charger reads "Output: 19.5V --- 4.62A" and the old Dell AC charger reads the same. The only difference is that the old Dell reads "Input: 100-240 V ~ 1.5A 50-60Hz" and the new HP reads "Input: 100-240V ~ 1.6A 50-60Hz." Other than the difference between 1.5A and 1.6A, the only other difference is that the new HP includes the term "wide range input" whereas the old Dell charger does not. DRosenbach (Talk | Contribs) 03:38, 27 January 2013 (UTC)
- Your original post said "both say 90V". Do they still both say this, or neither of them still say it, or one doesn't? --Demiurge1000 (talk) 03:43, 27 January 2013 (UTC)
- Ha! <s>Yes, they still both state everything I've already included above -- this last post was merely added because someone made a comment about specifics regarding input and output. They are both 90V chargers.</s> One of them says V85. I'm lost because I don't know the difference between watts, volts or amps. Here's a photo of the two. DRosenbach (Talk | Contribs) 03:48, 27 January 2013 (UTC)
- This is a good thing to give knowledge about, but please note that the Wikipedia Refdesk does not give advice, and will not be offering compensation should your toasted laptop be found in the ruins of your burned-out house. Telling people sources about how laptop connectors are identified and categorized, of course, is within the purview. Also note that the Computing Refdesk might have additional experts. Wnt (talk) 03:40, 27 January 2013 (UTC)
- Now that I see the stats, I should follow up what I was saying more specifically: is it possible that the greater power consumption (1.6 A) will mean there's an increased risk that the old laptop adaptor will overheat and catch itself or the surface it's lying on on fire? Wnt (talk) 03:50, 27 January 2013 (UTC)
- I agree with the previous respondents - there is a lamentable lack of standardization between laptops - so you have to use an abundance of caution. Even if the voltage, current, frequency and everything else are the same - and the connector physically fits, there is also the issue of whether the inner pin is positive or negative...generally there is a little diagram that shows you that. If you're 100% sure they are identical - then it should work - but it's a definite risk, so if you aren't 100% sure, then don't do it. I'd want to check at least voltage and polarity with a meter before plugging it in. SteveBaker (talk) 03:44, 27 January 2013 (UTC)
- The above advice is spot on. It was pretty much what I would have started my answer with before posting the following:
- The "Output: 19.5V --- 4.62A" is your DC output voltage and current. Same is good, but having the plus and minus reversed would be very bad. Most power supplies tell you which goes where. For example you will see a little circle inside a circle with one or the other labeled "+" or perhaps a dashed line above a solid line. You didn't tell me about any of those, so I am not 100% sure that they are not reversed. One solution would be to find a friend who owns a voltmeter and knows how to use it and have him check the supply.
- "Input: 100-240 V ~ " is the input voltage. Like most modern laptop power supplies it runs from pretty much any voltage (Japan is 100V, parts of Europe are 240V). The "~" means "AC".
- 1.5A or 1.6A is the input current, but that's the highest it can get (laptop using the full 4.62A and input at 100VAC). It will usually be a lot lower. The 1.5 and 1.6 are almost certainly just slight variations on those assumptions and can be ignored. "50-60Hz" is input frequency (US is 60, Europe is 50)
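To make the input-current arithmetic above concrete, here is a rough sanity check. This is only a sketch: the 85% efficiency and 0.7 power factor are assumed typical values for a laptop brick, not figures from either label.

```python
# Rough check of the label figures for a 19.5 V / 4.62 A laptop supply.
OUT_V = 19.5        # rated output voltage (V), from the label
OUT_A = 4.62        # rated output current (A), from the label
EFFICIENCY = 0.85   # assumed conversion efficiency (typical, not from the label)
POWER_FACTOR = 0.7  # assumed power factor (typical switch-mode supply)
IN_V = 100.0        # lowest rated input voltage (V), the worst case

out_watts = OUT_V * OUT_A                    # rated output power, about 90 W
in_watts = out_watts / EFFICIENCY            # real power drawn from the wall
in_amps = in_watts / (IN_V * POWER_FACTOR)   # apparent input current, about 1.5 A
print(round(out_watts, 1), round(in_amps, 2))
```

Note that 19.5 V × 4.62 A ≈ 90 W, so the "90" both labels carry is plausibly a wattage rating rather than a voltage; and the computed ~1.5 A worst-case input current lands right on the 1.5 / 1.6 A label values, supporting the point that those are just slight variations on the same assumptions.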
- If it were my laptop and I was sure about plus and minus not being switched, I would try powering up my old laptop with the new power supply first (less valuable) and keep my hand on the supply to see if it is getting hot. If I didn't have the old one any longer, I personally would try it on the new, but as was detailed above, that's your decision, and you know what they say about following advice you got from strangers on the Internet...
- EDIT: after writing the above but before posting, I saw the pictures. There is that circle in a circle I was talking about, and they match. In my opinion, this is less risky than using a no-name replacement power supply that claims to be compatible, but again, it is your decision to make. --Guy Macon (talk) 04:20, 27 January 2013 (UTC)
- Thanks! DRosenbach (Talk | Contribs) 04:26, 27 January 2013 (UTC)
- Getting here late, but as somebody who started out by destroying tube type radios and just recently melted a 9 volt wall wart by pulling too much current, I'd say that based on the two pictures above, the two power supplies are absolutely interchangeable. I wouldn't be surprised if a lot of the ICs inside were the same. The V85 means nothing, that's just some manufacturer's particular ID; the .1 Amp difference on the input ratings is unimportant, given that the minimum wall outlet has 15 amps to offer, and is just as likely to be a difference from one sample of the same item to another as between the two manufacturers. What's important regarding meltdown of the power supply is the output Amp rating, and that's the same; as the above poster implied, given that both manufacturers can be considered reliable, it's likely that both supplies meet their specs. Gzuckier (talk) 06:02, 27 January 2013 (UTC)
- Agreed, I'd be willing to take the risk with my own laptop (after having made all the checks above). Most of the control circuitry is within the laptop, so it just needs an appropriate voltage and current. I have seen two pieces of expensive equipment ruined by plugging in the wrong power supply (even though the plugs were a perfect fit), so it is wise to check very carefully before "trying out". (And, as mentioned above, none of us can be sued if we are all wrong!) Dbfirs 09:18, 27 January 2013 (UTC)
- Thanks to all who contributed. I'm posting this with my Dell working off of the new HP charger and so far, there's been no fire. DRosenbach (Talk | Contribs) 18:22, 27 January 2013 (UTC)
== Wooden coffer ==
I have a small antique wooden coffer that I need to reinforce. The problem is that the bottom panel is simply nailed in, and I fear that it is not designed to support 70 kg of weight. It measures ~36 x ~27 x ~17 cm, and is constructed from ~13 mm thick wood. How should I reinforce the chest in a way that preserves its 1910 style? Perhaps iron straps? Plasmic Physics (talk) 06:58, 27 January 2013 (UTC)
- Iron straps (with big, protruding rivets) would indeed be best -- two L-section straps, one at each end, should be able to hold the weight. 24.23.196.85 (talk) 07:29, 27 January 2013 (UTC)
- As a note, the iron fixtures, such as the hinges are attached via flat head, slot screws. Plasmic Physics (talk) 07:44, 27 January 2013 (UTC)
- In that case, matching screws would be better stylistically (not to mention more practical to install). 24.23.196.85 (talk) 07:57, 27 January 2013 (UTC)
- Wouldn't 4 smaller straps, two a side, distribute the load more evenly? Or is there a reason for only using two? Plasmic Physics (talk) 08:16, 27 January 2013 (UTC)
- Yes, four straps is also an option -- two is just the minimum. 24.23.196.85 (talk) 00:25, 28 January 2013 (UTC)
- If it's antique - I'd want to do a non-destructive "fix" - or at least one that didn't affect the outside appearance of the thing. So how about a box-within-a-box? Construct a strong container (from metal, plastic, heavier-grade wood, whatever) that fits tightly inside the original box but which can better take the weight and connect to whatever is going to be used to lift it. The antique becomes more of a decorative 'skin' around the "real" box which does all of the work of containing and distributing the load. SteveBaker (talk) 15:42, 27 January 2013 (UTC)
- Along that train of thought (that any alteration would reduce its value as an antique), another option would be to support the bottom externally. I'm assuming here that it has legs which slightly lift the bottom. In this case, you could place, underneath it, plywood of the proper thickness and cut to the same width and length as the bottom, so it would be supported. However, you'd need to be careful never to lift the coffer while loaded. If you can do that, you can both use it, and preserve its value, in this manner. StuRat (talk) 17:50, 27 January 2013 (UTC)
- It's not particularly valuable as an antique; it has more of a sentimental value. It doesn't have legs, and the whole idea is to make it liftable when loaded. I need to stop the bottom panel from either cracking, or the nails pulling out. Plasmic Physics (talk) 00:09, 28 January 2013 (UTC)
- There is a problem with using a box within a box: the internal box is still resting on the bottom, so you haven't really changed anything. Plasmic Physics (talk) 00:09, 28 January 2013 (UTC)
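As a crude check of whether the 13 mm panel itself is the weak point, one can treat the bottom as a simply supported beam under a uniform 70 kg load. This is a rough approximation (a real plate supported on four edges is stronger), using the dimensions given above:

```python
W = 70 * 9.81   # total load (N) for 70 kg
span = 0.27     # shorter side of the bottom panel (m)
width = 0.36    # longer side, taken as the beam width (m)
t = 0.013       # panel thickness (m)

# Simply supported beam, uniformly distributed load:
moment = W * span / 8            # maximum bending moment (N*m)
section = width * t ** 2 / 6     # section modulus of the panel (m^3)
stress_mpa = moment / section / 1e6
print(round(stress_mpa, 1))      # peak bending stress, about 2.3 MPa
```

A couple of MPa is far below the bending strength of even soft woods (tens of MPa), which suggests the panel itself is fine and the nailed joints are indeed the part that needs reinforcing, as the question supposes.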
- You mention 70 kg of weight. Is this 70 kg of weight that you are planning on putting into the box? Does it look like any of the "wooden coffers" when doing a Google image search for "wooden coffer"? Bus stop (talk) 00:23, 28 January 2013 (UTC)
- Yes, I'm planning to put it in the box. Putting it on the box seems a bit absurd, as that would defeat its purpose. No, those coffers are too ornate. My dad told me that as far as he knows, it was made by his grandfather, although it looks suspiciously like a Boer ammunition box that's received a coat of wood stain. Plasmic Physics (talk) 01:08, 28 January 2013 (UTC)
- Yes, it is. Plasmic Physics (talk) 02:19, 28 January 2013 (UTC)
- I guess one consideration is replacing the bottom panel, perhaps with strong plywood, such as aircraft plywood. Perhaps screws will hold sufficiently well, depending on the side panels to which they would have to be attached. Retaining the original appearance would depend on choice of materials obviously. The metal straps idea sounds sound. Bus stop (talk) 03:55, 28 January 2013 (UTC)
- Another thought. Metal sheathing could be secured all around the bottom. If the thin metal sheathing were cut to length and bent lengthwise at a right angle, it could probably then be tacked in place with a sequence of relatively small nails or tacks, relatively closely spaced. Four such sheaths might reinforce the four angles at which the sides of the box adjoin the bottom of the box. Locating the materials as well as the tools for working the materials may present an initial challenge but it may be doable and worth it. I would think the sheet metal used need not be particularly strong as its functionality would be continuous along all edges. Whether to replace the bottom or not would of course depend on the strength of the original bottom. Bus stop (talk) 04:21, 28 January 2013 (UTC)
- Some interesting images related to this under Google image search wooden shipping crate reinforced. Bus stop (talk) 20:05, 28 January 2013 (UTC)
- Since your concern seems to be transport, why not just make a good fitted tray with handles to sit it upon, one that can be lowered along with it into a bigger box if need be? That way you get what you need without alteration to the item. μηδείς (talk) 20:36, 28 January 2013 (UTC)
Thanks all for your ideas. Plasmic Physics (talk) 06:23, 30 January 2013 (UTC)
== Stone Age Malthusianism and Cornucopianism ==
I've read some articles suggesting that resource peaks, climate change and unsolved substitution problems would often have been the issue of the day for preagricultural tribes. Has any work been done on the fitness and popularity of Malthusian and Cornucopian memes (population control, food rationing, exploratory migration etc.) in such an environment, and the difference instinct and cognitive biases would have made? NeonMerlin 14:19, 27 January 2013 (UTC)
- About migration: It's apparent that migrations took place, particularly between remote Pacific Islands (like to Easter Island), which would have had a high risk associated with them (if a major storm hit them in transit, they would all be dead). The only reason I can see to take such a risk is if there was no other option. That is, those people would have died had they stayed put, quite possibly because the resources there were stretched to the limit. However, I don't know if those migrations took place before or after agriculture developed. StuRat (talk) 17:43, 27 January 2013 (UTC)
- Well the Polynesian / Malay migrations happened after agriculture was developed, and they brought pigs, chickens, and crops wherever they travelled. Graeme Bartlett (talk) 04:50, 28 January 2013 (UTC)
== Proton Density of Various Materials ==
I'm wondering what the lower and upper bounds of average proton density for various materials and substances might be? I realize this is going to depend a lot on the type of molecules present (eg: density of fruit is much less than that of lead) - I'm just interested in ballpark figures anyway. 75.228.159.2 (talk) 15:13, 27 January 2013 (UTC)
- Very roughly speaking, most atomic nuclei contain approximately equal numbers of protons and neutrons. Since protons and neutrons have (again, approximately) the same mass, about half the mass of most solids is down to the protons, so about half of the total mass density is the proton (mass) density. (The contribution by electrons is negligible.)
- Two important caveats. First, heavier nuclei have proportionately more neutrons (to take an extreme example, uranium-238 nuclei contain 92 protons and 146 neutrons), so if you're looking at elements further down the periodic table you'll want to account for that. Second, the most abundant isotope of hydrogen (hydrogen-1) is a single proton and no neutrons, so pure hydrogen is essentially all protons by mass; extremely hydrogen-rich compounds like methane (1 atom carbon-12 and 4 hydrogen atoms gives 10 protons and just 6 neutrons) will also have a proton-enriched composition.
- As an aside, the term 'proton density' is also used in magnetic resonance imaging to refer to the abundance of hydrogen-1 only, and not to protons that may be part of other heavier nuclei. TenOfAllTrades(talk) 16:25, 27 January 2013 (UTC)
- Ballpark figures: The atomic mass unit is 1.66×10^−27 kg, which is approximately the mass of one proton. With the crude assumption that protons represent half of the total mass, you get: lead, at about 11000 kg/m^3, would have 11000/(2 × 1.66×10^−27) or about 3×10^30 protons per m^3; water has about 3×10^29 protons per cubic metre... Should be correct within 30% I think. Unless I made a mistake somewhere... When you're talking about MRIs, this answer is likely not helpful... Ssscienccce (talk) 16:52, 27 January 2013 (UTC)
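The back-of-envelope numbers above can be made slightly more precise by using the actual proton fraction Z/A of each nuclide instead of the flat one-half assumption (densities below are rough handbook values):

```python
M_PROTON = 1.673e-27  # proton mass in kg, roughly one atomic mass unit

def protons_per_m3(density_kg_m3, z, a):
    """Approximate proton number density: mass density times the proton
    mass fraction (Z nucleons out of A are protons), over the proton mass."""
    return density_kg_m3 * (z / a) / M_PROTON

lead = protons_per_m3(11340, 82, 207)   # Pb: 82 of 207 nucleons are protons
water = protons_per_m3(1000, 10, 18)    # H2O: 10 protons per 18 nucleons
liquid_h2 = protons_per_m3(71, 1, 1)    # liquid hydrogen: all protons by mass
print(f"{lead:.1e} {water:.1e} {liquid_h2:.1e}")
```

This gives roughly 2.7×10^30 protons/m^3 for lead and 3.3×10^29 for water, consistent with the estimate above. Note that liquid hydrogen, despite being essentially all protons by mass, still has fewer protons per cubic metre than water because its bulk density is so low.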
== Contributing to medical knowledge ==
Last year I had a run-in with a prescription medicine which resulted in my emergency admission to hospital. I am wondering if there is a medical database in existence that I can contribute my experience of this drug to? (not medical advice of course) --TammyMoet (talk) 16:54, 27 January 2013 (UTC)
- I'd actually like to see a "Rate My Drug" site where people rate drugs based on side effects, effectiveness, etc. This would have the potential to let people know of problems with drugs far earlier than the normal regulatory process, which unfortunately is filled with conflicts of interest, at least in the US. I wonder, could drug makers sue such a site, even if they post disclaimers that "These are only the opinions of our members". StuRat (talk) 17:36, 27 January 2013 (UTC)
- Here's a real answer: FDA's MedWatch is a great tool, just click the "Report a serious medical product problem" link under "Resources for you" on the left. -- Scray (talk) 17:42, 27 January 2013 (UTC)
- I wouldn't be surprised if on the whole this is discouraged by the powers that be, as it might not be as useful as you would think at first. Patients on the whole lack the knowledge and jargon to accurately describe what is happening to them, leading to possible confusion and contradictory evidence. If this goes via health professionals, at least (one would hope) it gets reported in an accurate manner. Fgf10 (talk) 18:39, 27 January 2013 (UTC)
- I'm sure the medical professionals involved reported the adverse event at the time. However, there were symptoms which have disappeared since I stopped taking the tablets which I didn't report at the time because I didn't associate them with the drug in question. It would complete the information around the incident if I could report this cessation of symptoms somewhere. --TammyMoet (talk) 19:33, 27 January 2013 (UTC)
- Well, the "few experts" versus "many novices" paradigms come up often, including here at Wikipedia. The idea here is that many novices can provide more, and hopefully better, info, than a few experts. There are other examples, like stock markets being used to "rate" companies, compared with expert opinions. One problem with "experts" is, that since only a few control the data, they can be bribed or otherwise have a conflict of interest, while it's impossible to do so with millions of people. For example, avoiding damage to the company's profits may also be a concern for the experts, while the novices are only concerned with the health of the patients. StuRat (talk) 04:29, 28 January 2013 (UTC)
- The whole issue of the international development and marketing of therapeutic drugs has some serious concerns. The commercial drugs world is driven much more (perhaps exclusively) by commercial gain than any other reason. Information about the effects of medicines is frequently withheld by manufacturers. It is a murky world. Bad Pharma by Dr. Ben Goldacre lays out clearly and authoritatively the shortcomings of the medicines market, a frightening but essential read if you are concerned about this topic. Richard Avery (talk) 07:44, 28 January 2013 (UTC)
Good grief. Some of you have some very unrealistic ideas of why drug reactions are not often "reported" to the FDA, or to the medical literature. By far the LEAST likely explanation is that a doctor whose patient has a reaction to a drug wants to conceal it to protect company profits (Stu: "well if there really was a conspiracy you would say that, wouldn't you?"). The real explanations: (1) Reporting is just more unreimbursable paperwork. Doctors in the US spend hours of each day on data collection and entry that patients and payors demand but do not expect to pay for. If it takes 15 minutes to do a drug reaction report on Medwatch, would you pay your doctor to do it? (2) Uncertainty. The patient may be certain that the daily fukitol pill she has been taking for 2 years caused her hair to turn green last week, but the doctor will not be sure that it is not simply coincidence unless the same effect has happened to multiple people taking it. (3) Diffused responsibility. These days, at least in the US, the doctor who prescribed the drug and the one who handled the side effect problem are often different. (4) Ignorance of process. Not sure how many doctors even know about Medwatch. Certainly not all. The percentage of US doctors who have had a patient suffer a side effect from a medication: 99%. Percentage of US doctors who have reported even one reaction to Medwatch: I doubt it's 5%. (5) Pointlessness. Most doctors are aware that nothing happens after a Medwatch report except paperwork. If the drug has been widely used for years, the side effect has already been known. If the side effect is an unknown one, especially if trivial or self-limited, like a headache, it's unlikely due to the drug. Your doctor may not spend as many hours as you imagine pondering your problems when you are not in front of him, but I promise you he does not get paychecks from pharmaceutical companies to keep you ignorant or deny problems. alteripse (talk) 12:38, 28 January 2013 (UTC)
- This could be a broader def of "conflict of interest". The doctor has more of an interest in getting home for dinner than improving the health of patients who aren't even paying him. I still say the cure for most of those issues is for the patients to self-report their own problems:
- 1) The patient is also not reimbursed, but the desire to "tell their story" may make them more willing to report it than the doctor. Also consider that for the doctor to report it, the patient has to have already reported it to them.
- 2) The uncertainty exists in either case. However, I'd prefer to see a count of all reported cases of hair turning green after taking a med, so I can decide for myself if I see a pattern there. A single doctor may only have one such patient, but, around the world, there might be thousands, making the trend far clearer.
- 3) Yes, lack of one doctor responsible for a person's health is a problem. However, I'd restate this as "the patient is ultimately responsible for their own health".
- 4) Ignorance of process will also be a problem with patients, but, with a much larger pool, hopefully enough will self-report incidents for patterns to emerge.
- 5) Here's the main benefit of self-reporting. If there are two meds for a given condition, and one is rated much higher by patients than the other, then this is valuable info a patient can take to the doctor, when asking for a prescription. StuRat (talk) 22:15, 28 January 2013 (UTC)
Sounds good but think a little. Common side effects are already known. Uncommon or previously unreported symptoms are statistically more likely to be unrelated to the drug under suspicion. The only way to solve that is to have a systematic collection of data to determine whether more events than would be expected by chance are happening. There already are internet discussion areas for patients to do exactly what you are suggesting. Most diseases now do have active support and discussion groups where one could ask exactly the kinds of questions you propose: "has anyone else taking medication X had this symptom?", and "for those of you who have tried both X and Y, which worked better for you". Those are important functions of those types of groups and websites. I am not sure what you have in mind beyond that. As you perhaps have surmised, my explanation of what is involved in reporting, what is useful and useless, etc, is derived from actual experience of reporting potential side effects of a specific medication and trying to ascertain what reports had already been made. alteripse (talk) 22:39, 28 January 2013 (UTC)
- Well, a discussion group only works with small numbers of people, not millions. I'd have a list of possible side effects, and each person could enter their med and the side effect they experienced. For hair turning green, for example, there might be a section titled "changes to hair", then a subsection "hair on the head", then a sub-subsection "color changes", and then finally they could select "green" from the list of colors. The software would then note when a statistically significant number of people taking the same med report the same symptom, rather than relying on a discussion group stumbling upon it. If they wished, the patients could also leave their e-mail address, so anyone investigating this side effect could contact them and ask follow-up questions. And note that there would be no Latin! StuRat (talk) 07:41, 29 January 2013 (UTC)
- Just to inform you all, Jebus's Yellow Card link was the one I was looking for. It seems to be doing what StuRat is talking about above. Thanks to Jebus for that. --TammyMoet (talk) 11:46, 29 January 2013 (UTC)
- There is no statistical software that would be able to analyze the reports for a voluntary database with no control group. And NO LATIN? How ever would we communicate precisely?! alteripse (talk) 12:20, 30 January 2013 (UTC)
- The control group is the general population. If 1% of the people who take a med have hair that spontaneously turns green, compare this with the portion of the general population whose hair spontaneously turns green. StuRat (talk) 05:07, 1 February 2013 (UTC)
- The matched control group is not the general population, but people of similar age, sex, ethnicity, culture, etc who have the disease but have not been treated with the agent in question. A perfect example of the uselessness of uncontrolled data for infrequent risk has been going on for the last few years regarding the possible effect of growth hormone treatment in childhood on cancer risk in adult life. This data is scarce but quite important to parents and doctors making treatment decisions. The French tracked down some middle aged adults who had been enrolled in a registry of treated children back in the 70s-80s, asked about 10-20 cancers and compared the reported rates to the national cancer rates for adults of the same age range and sex. And then discovered that for a couple of the less common types of cancer there was a higher rate among the former GH patients than current data suggests would be expected in the general population. There are many problems with the data. The first and most obvious is data dredging for multiple variables -- if you ask about enough conditions looking for an association that has less than a 5% chance probability, and you measure 20 associations, the likelihood that you find an association that in isolation looks real but is really chance becomes high. The second problem is that even if the association is real, if the control population is the general population, maybe the condition is a result of the disease itself, rather than the treatment. If we didn't have plenty of evidence that retinopathy is a complication of diabetes, your proposed system would be likely to suggest it is a complication of taking insulin. This is the fatal flaw in unsolicited side effect reports.
If a couple of patients in an online discussion forum discover they have the same uncommon complication, the next step is that one of their doctors attempts to survey a larger number of patients and if even a couple more cases are turned up they are reported as a possible association. It can live like that for many years as "lore" among doctors who treat the disease and patients who have it, and may or may not be true. I can give you examples of both. Or some doctor can decide to try to systematically study it epidemiologically to settle the question and quantify the risk. Bottom line: I can see no difference between the kind of voluntary "database" you propose and posing the question on a patient website and asking a doctor who treats many cases if he has seen one--- all simply raise the question and have about the same chance that someone will say "hm i know of another case. maybe its real. lets try to find out..." alteripse (talk) 11:47, 1 February 2013 (UTC)
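For illustration only, here is the kind of naive comparison such a self-report database might automate, with entirely made-up numbers. As the reply above argues, it lacks a matched control group and any correction for screening many endpoints, so a "significant" result from it proves little on its own:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Naive two-proportion z statistic comparing an event rate in a
    treated group against a baseline rate (no matching, no multiplicity
    correction -- exactly the flaws discussed above)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical: 12 "green hair" reports among 1,000 users of a drug,
# versus a 0.5% background rate estimated from 100,000 untreated people.
z = two_proportion_z(12, 1000, 500, 100000)
print(round(z, 2))  # about 3.1, nominally "significant" -- but see the caveats
```

If twenty such endpoints were screened, a Bonferroni-style correction would demand a much stricter threshold before treating any one of them as more than noise.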
== If they give an IQ test to someone who's had a coma, do they subtract the coma length from the chronological age? ==
What if it's only 0.1 years? (I would still do it) They never asked me if I'd had a coma when they did it in school, though. Come to think of it, are things like brain damage from hits and hypothermia accounted for in the bell curves? Sagittarian Milky Way (talk) 18:31, 27 January 2013 (UTC)
- I don't think anyone can tell exactly how much of an IQ score is a result of "mental age", experience and knowledge, and how much of it is a result of the physical age of the brain. Subtracting the duration of the coma would raise the score for a child but lower the score for an adult, so an adult would have to perform better on the test if he had been in a coma, which seems unlikely. Ssscienccce (talk) 23:47, 27 January 2013 (UTC)
- Adult IQ test results are compared to adults of all ages, so chronological age doesn't really come into play. With children, adjusting the chronological age (say, pretending that a ten-year-old was only five after waking up from a five-year coma) would render IQ testing completely ineffective at measuring the effects of this long hypothetical coma. I'm not sure I completely understand the rest of your question though. EricEnfermero Howdy! 00:09, 28 January 2013 (UTC)
- Our article Intelligence quotient discusses IQ and age. The gist of it is that once upon a time there was one test for children that computed IQ = (mental age / chronological age)*100. That computation is no longer in use. Also, all adults are usually lumped together; a 25 year old who answered all the questions the same way as a 50 year old would be assigned the same IQ score. Jc3s5h (talk) 00:04, 28 January 2013 (UTC)
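The two scoring schemes described above can be sketched in a few lines (the raw score, norm mean, and SD in the example are hypothetical):

```python
def ratio_iq(mental_age, chronological_age):
    """Old Stanford-Binet style ratio IQ, no longer in use."""
    return 100 * mental_age / chronological_age

def deviation_iq(raw_score, norm_mean, norm_sd):
    """Modern deviation IQ: the score relative to a same-age norm
    group, rescaled to mean 100 and standard deviation 15."""
    return 100 + 15 * (raw_score - norm_mean) / norm_sd

print(ratio_iq(12, 10))            # 10-year-old scoring at a 12-year-old level: 120.0
print(deviation_iq(130, 100, 20))  # 1.5 SD above the norm mean: 122.5
```

Under the deviation scheme, subtracting coma years could only matter for children, by changing which norm group the score is compared against; for adults lumped into one norm group it would change nothing.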
- Are you sure? He showed me dividing the mental age by the actual, and it wasn't *that* many years ago. And yes, age isn't involved for adults. Sagittarian Milky Way (talk) 03:34, 28 January 2013 (UTC)
- Those bell curves of IQs. Are they raw data, or do they adjust them for anything? Like is 100 the average of everyone, or the average of everyone if no one had their intelligence artificially altered by things such as brain damage from accidents, thalidomide, lead paint, crack babies and other drug abuse (maternal and personal), their time in a coma, and possibly other things? Shouldn't 50 be commoner than 150 in real world data? For some things you'd want the raw data (like calculating how many mentally retarded need care), for others you'd want the cleaned up data. Sagittarian Milky Way (talk) 03:34, 28 January 2013 (UTC)
- The bell curves are the raw data -- if the researchers want to compare IQs of selected subgroups of people, they essentially have to select scores from just those subgroups and compile the data for those groups from scratch. 24.23.196.85 (talk) 07:28, 28 January 2013 (UTC)
- Are there any studies that correlate hours of sleep per day to IQ?165.212.189.187 (talk) 20:12, 28 January 2013 (UTC)
- Sleep deprivation lowers IQ μηδείς (talk) 23:00, 28 January 2013 (UTC)
- How about too much sleep? and is there an optimal amount of sleep per day for the best potential IQ?165.212.189.187 (talk) 14:50, 29 January 2013 (UTC)
- That's nothing, horniness can lower IQ more than that. Sagittarian Milky Way (talk) 00:41, 29 January 2013 (UTC)
== Seasonal changes in temperature ==
I cannot reconcile the reasons why we have such differing seasonal temperature differences with the answers I have accessed.
If the seasons are mainly dictated by the tilt of the earth, then why should such a small difference in distance (toward or away from the sun) of, at most, a few thousand miles cause this when the earth is 93 million miles from the sun? I understand that the earth's orbit is elliptical, but the earth is actually furthest from the sun (in July) when the northern hemisphere has its summer. Surely this would be like having a roaring fire in a big room 20 meters away and expecting to feel a difference if you move 1 mm nearer? 86.138.72.18 (talk) 18:49, 27 January 2013 (UTC)
- From what I understand, it's not the distance between the earth and the sun but the distance through the atmosphere that the heat radiating from the sun must pass through to reach the surface of the earth. Thus, the portion of the hemisphere that is tilting towards the sun is receiving its solar radiation through the thickness of the atmosphere, while that portion of the hemisphere tilting away from the sun receives its heat filtered through a non-straight line through the atmosphere, thus passing through 1.2, 1.5, etc. times the thickness of the atmosphere, and that's what's causing the dissipation of enough solar radiation to create what we perceive as the drastic temperature changes associated with the seasons. That's what's meant by more direct sunlight as explained in the seasons article. DRosenbach (Talk | Contribs) 19:01, 27 January 2013 (UTC)
- Also this. Shine a flashlight at the center of a ball. Shine it at the edge (same distance) and move your head over that area. It's dimmer. Sagittarian Milky Way (talk) 19:05, 27 January 2013 (UTC)
- To rephrase that: if you hold up a quarter to the Sun, it will get nearly the same amount of light anywhere - North Pole, equator, noon, sunset - subject only to the atmospheric absorption mentioned and some minor differences in distance. But if, in order to be face on to the Sun, that quarter is lying on its edge on an ice floe at the North Pole on the equinox, it casts a shadow all the way toward a distant horizon. All that ice shares the light now blocked by that one lousy little quarter. If it's lying flat on the ground at the Equator, it gets the same light on that one little spot. Wnt (talk) 19:18, 27 January 2013 (UTC)
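The flashlight and quarter arguments both reduce to the cosine of the sun's zenith angle. A minimal sketch for local noon at 45°N, ignoring the atmosphere and day length (both of which make winter even weaker):

```python
import math

EARTH_TILT = 23.44  # axial tilt in degrees

def noon_insolation_factor(latitude_deg, solar_declination_deg):
    """Relative solar flux on flat ground at local noon: the cosine
    of the sun's zenith angle (1.0 means the sun directly overhead)."""
    zenith = abs(latitude_deg - solar_declination_deg)
    return max(0.0, math.cos(math.radians(zenith)))

summer = noon_insolation_factor(45, +EARTH_TILT)  # June solstice, 45 N
winter = noon_insolation_factor(45, -EARTH_TILT)  # December solstice, 45 N
print(round(summer, 2), round(winter, 2), round(summer / winter, 1))
```

At 45°N the noon flux per unit of ground is about 0.93 of the overhead value in midsummer but only about 0.37 in midwinter, a factor of roughly 2.5 from geometry alone.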
- Never thought of it that way -- nice! DRosenbach (Talk | Contribs) 21:24, 27 January 2013 (UTC)
- A small cupronickel object about an inch :) Or are you seriously asking what it is? Sagittarian Milky Way (talk) 20:54, 27 January 2013 (UTC)
- HiLo48 comes from an enlightened land where Vegemite is considered to be a delicacy and the coins are 5c, 10c, 20c, 50c, $1 and $2. The closest in size to the 24.26 mm (Inches? We don't need no stinking inches!!) US 25c coin ("quarter") is the 25.00 mm $1 coin. And we really shouldn't use US coins as examples, many Wikipedia editors live elsewhere. --Guy Macon (talk) 21:21, 27 January 2013 (UTC)
- A quarter-dollar is a 25 cent piece. Do you call a 20 cent piece a "fifth" ? This could lead to disappointment when your kid tells you he just found a fifth in the driveway. :-) StuRat (talk) 04:46, 28 January 2013 (UTC)
- Hell no, 20c would be a quinter, not a fifth. If we had a base-12 system, we'd have the 1/6 coin, too, the "sexter." - ¡Ouch! (hurt me / more pain) 11:33, 30 January 2013 (UTC)
- Probably the term is not understood by many South Africans and Australians? Because Americans in all their weirdness think the most important attribute of a standard liquor bottle is that it's a fifth of a unit big enough to kill 6.4 men (gallon, 3.785 l), it's called a fifth despite being the whole bottle. (that is, 3.2 of our cups out of 16, 0.8 quarts, 25.6 fluid ounces, 1.6 pints, or about 0.75 liters out of 3.75) Maybe they're never actually 757.082356 mL anymore though, so travellers don't have to pay import tax on the entire thing just because the law is metric. I'd be relieved if my kid's fifth turned out to be an Australian 20 cent piece, though. Sagittarian Milky Way (talk) 18:48, 29 January 2013 (UTC)
- Some of the above is right, some is wrong. The core reason why the Northern hemisphere gets less heat from the sun in winter is because it intersects less sunlight. The sun always illuminates about half the Earth (there is some minimal wiggle room because the sun is larger than the Earth, and also if you take into account atmospheric refraction). In northern summer, the pole is pointing (a bit) towards the sun, and more than half of the northern hemisphere is in the sun. So it gets warmer than average. To make up the difference, less than half of the southern hemisphere is in the sun, so that part gets colder. --Stephan Schulz (talk) 19:38, 27 January 2013 (UTC)
- Or, to put it another way, unless you are near the equator, during the summer the sun gets higher in the sky (warmer for the same reason that the noonday sun heats the ground more than the sun right before it sets does) and the days are longer and the nights shorter. --Guy Macon (talk) 20:38, 27 January 2013 (UTC)
- A simple experiment I recall from High School science class: in a darkened room shine a flashlight at a given distance directly overhead (90°) on a piece of graph paper. Count or calculate the number of squares that the light covers. Do the same (same distance) but this time have the flashlight at a specific angle (eg: 45°) - The number of squares covered by the light is larger. Think of the flashlight as a "beam" of sunlight. The same amount of light (energy) is spread over a larger area, and thus the energy per "square unit" is less. For me, this helps explain why a change of angle on the Earth's surface (due to axis tilt) creates a seasonal difference in temperature. ~:74.60.29.141 (talk) 23:02, 27 January 2013 (UTC)
- P.s.: this experiment explains how latitude affects climate; but also helps explain seasonal variation - which also includes shorter daylight hours, etc. 74.60.29.141 (talk) 23:14, 27 January 2013 (UTC)
- ... and, just to show that distance from the sun makes only a small (7%) difference, the northern hemisphere is currently nearer to the sun in winter and further away in summer. (Perihelion was on January 3rd.) Dbfirs 12:26, 28 January 2013 (UTC)
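The flashlight-on-a-ball point above can be put in numbers: the energy falling on a unit of horizontal ground scales with the sine of the sun's elevation angle (Lambert's cosine law, stated with respect to the zenith angle). A minimal sketch in Python, ignoring atmospheric absorption:

```python
import math

def relative_insolation(elevation_deg):
    """Power per unit of horizontal area, relative to the sun directly overhead.

    Pure geometry (Lambert's cosine law); atmospheric absorption is ignored.
    """
    if elevation_deg <= 0:
        return 0.0  # sun at or below the horizon
    return math.sin(math.radians(elevation_deg))

# Overhead sun delivers the full flux; at 30 degrees elevation, only half,
# because the same beam is spread over twice the ground area.
print(relative_insolation(90))
print(relative_insolation(30))
```

This is why a low winter sun heats the ground so much less even though the quarter held face-on to it intercepts nearly the same light.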
January 28
Why don't the the sunrise and sunset times go the other way at the solstice?
While we're on the subject of the Earth's orbit... I always assumed that after the 21st of December, or thereabouts, the sunset got later each day. However, when I look up the sunrise and sunset times for about this time for, say, London, although the days begin to get longer overall after the solstice, the sunsets don't necessarily get later or the sunrises earlier - one or the other will happen but not both together. There seems to be a couple of weeks discrepancy either side of the solstice before everything coincides and we get both a later sunset and an earlier sunrise on the same day. What causes the discrepancy? Does it depend on where you are on the surface of the Earth and if so, is there somewhere where everything changes on the day of the solstice and the following day the sunrises start to get earlier and the sunsets later? Richerman (talk) 00:22, 28 January 2013 (UTC)
- It would work that way if the earth's orbit was both flat and circular. However, it isn't. The earth's axis is oblique (tilted by an angle relative to the orbital plane) and the orbit is an ellipse. As a result, there are the deviations you note from the expected "symmetry" around the solstices and equinoxes. See Equation of time for these corrections. --Jayron32 00:39, 28 January 2013 (UTC)
- I'm not quite sure what you're getting at, but one cause of variation in the time and date of the solstices and equinoxes is that our calendar is not exactly one tropical year long, so the times of these phenomena vary quite a bit depending on how long it has been since a leap day. Jc3s5h (talk) 01:15, 28 January 2013 (UTC)
- Interesting answer Jayron. I see the point of the elliptical motion, but I don't get the relevance of the inclination of the Earth's orbit relative to the solar system. If the other planets weren't there, there would be no plane of the solar system other than the Earth. Or am I missing something? IBE (talk) 07:33, 28 January 2013 (UTC)
- What you've got is a complex interaction of several planes, none of which line up. There's the ecliptic, which is the plane the sun occupies in its apparent motion in the sky if you hold the earth still. Then there's the Celestial equator, which is a projection of the earth's equator out to infinity. These planes are offset from each other by an angle; that angle is called the "obliquity" of the ecliptic. The fact that these planes don't match has to be taken into account when calculating the difference between the "mean solar day" (i.e. an exact 24 hour period measured on the clock) and the "apparent solar day", which is the time between the reappearance of the sun in the same position on successive turns of the earth around its axis. Over the course of an entire year, this averages out to the exact 24 hour day, but on any given day of the year, the length of the actual day (from the sun appearing in the same location on successive turns of the earth) varies depending on exactly where the earth is in its orbit. Both the elliptical nature of the orbit, and the angle of the rotation relative to the ecliptic (the obliquity defined above), will shorten or lengthen the actual length of the day. These differences account for the fact that, for example, the day doesn't lengthen uniformly as one moves away from the solstice: the shortest day is not the same as both the latest sunrise and the earliest sunset, which is what the OP was asking about. In New York City this past December (see [2]), for example, the Winter Solstice occurred on December 21, but the earliest sunset occurred on about December 8 (4:28 PM) and the latest sunrise didn't occur until sometime during the first week of January (7:20 AM). There's thus a discrepancy caused by the Equation of time: the shortest daylight (solstice) of the year is neither the day with the earliest sunset, nor the day with the latest sunrise.
It would be if the earth had a perfectly circular orbit AND an axis perfectly perpendicular to its orbital plane. The fact that the earth's orbit is not circular, and its axis not perpendicular, is why there is a discrepancy. --Jayron32 07:51, 28 January 2013 (UTC)
- I was with you until nearly the end. If the Earth's orbit were perfectly circular, and its axis perpendicular to the plane of that orbit, there wouldn't be a 'shortest day' - all days would be alike. AlexTiefling (talk) 10:19, 28 January 2013 (UTC)
- Also, it's important to note that the length of a day is sinusoidal, with very little change near the peak (summer solstice) and trough (winter solstice), and rapid change at the vernal and autumnal equinoxes. This allows minor effects from the non-circular orbit, etc., to have a noticeable impact around the solstices, while those effects are swamped out at the equinoxes. StuRat (talk) 04:40, 28 January 2013 (UTC)
- We are accustomed to thinking that the length of a day is 24 hours. Our clocks accurately measure 24 hours each day but the length of a day, measured by observing the position of the sun above the horizon, is only 24 hours when averaged over a whole year. It actually varies on a daily basis – sometimes a little longer than 24 hours and sometimes a little shorter. Relative to a clock, there is a discrepancy on all but a couple of days a year. This discrepancy aggregates and causes a difference between clock time and solar time. It is this difference that causes the day of earliest sunrise to occur a couple of weeks before the summer solstice, and the day of latest sunset to occur a couple of weeks after the summer solstice. Similarly, the day of latest sunrise occurs a couple of weeks after the winter solstice; and the day of earliest sunset occurs a couple of weeks before the winter solstice.
- The reason for the slight variation in the length of a day, measured by observing the position of the sun above the horizon, is that the Earth’s orbit is elliptical rather than circular. When the Earth is closest to the sun it is moving at its fastest, and it is moving at its slowest when it is farthest from the sun – Kepler's Laws. Also the distance to the sun influences the angle through which the Earth must rotate before the sun appears above the horizon in the same position as the previous day. Dolphin (t) 11:41, 28 January 2013 (UTC)
- Note that when I said "the length of a day is sinusoidal", I meant the dawn-to-sunset period, not the (approximately) 24-hour period. StuRat (talk) 22:00, 28 January 2013 (UTC)
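StuRat's point about the near-sinusoidal variation of daylight can be sketched with the standard sunrise equation, cos(ω₀) = −tan(φ)·tan(δ), plus a common cosine approximation for solar declination. This is a textbook approximation, not an ephemeris; it's good to a few minutes:

```python
import math

def day_length_hours(latitude_deg, day_of_year):
    """Approximate daylight duration from the sunrise equation.

    Uses a simple cosine fit for solar declination (+/- 23.44 degrees);
    accuracy is a few minutes, not ephemeris-grade.
    """
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    x = -math.tan(math.radians(latitude_deg)) * math.tan(math.radians(decl))
    x = max(-1.0, min(1.0, x))  # clamp for polar day / polar night
    return 2.0 * math.degrees(math.acos(x)) / 15.0  # 15 degrees of rotation per hour

# London (51.5 N): around 7.6 hours near the December solstice,
# around 16.4 hours near the June solstice; the change per day is
# smallest at the solstices and fastest at the equinoxes.
print(round(day_length_hours(51.5, 355), 1))
print(round(day_length_hours(51.5, 172), 1))
```

Because the curve is nearly flat at the solstices, the small equation-of-time drift discussed below can shift the dates of earliest sunset and latest sunrise by a couple of weeks, exactly as the OP observed.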
- The asymmetry of dawn and dusk puzzled me for about 40 years (at least on the occasions I thought about it) because it is world-wide and no geographical features can explain it. It was only when I properly understood the equation of time (explained above) that I realised I was looking for an explanation of something not real, but created artificially by clocks. Sunrise and sunset are always symmetrical about local noon (except for local geographic effects), but local noon drifts each side of clock noon, causing an apparent asymmetry with earliest sunset seeming to occur in early December, and latest sunrise being not until early January by the clock. The reason (put simply) is that local noon is changing rapidly with respect to clock time around the winter solstice. This will not always be so -- see Milankovitch cycles. Dbfirs 12:17, 28 January 2013 (UTC)
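The drift of local solar noon against clock noon that Dbfirs describes is the equation of time. A widely used approximation (in minutes, positive meaning the sundial runs ahead of the clock) combines the eccentricity and obliquity terms in one formula; this is a sketch accurate to about a minute, not an exact ephemeris:

```python
import math

def equation_of_time_minutes(day_of_year):
    """Sundial time minus mean clock time, in minutes (common approximation).

    Combines the orbital-eccentricity and axial-obliquity contributions;
    accurate to roughly one minute.
    """
    b = math.radians(360.0 / 365.0 * (day_of_year - 81))
    return 9.87 * math.sin(2 * b) - 7.53 * math.cos(b) - 1.5 * math.sin(b)

# Around early November the sundial runs roughly 16 minutes ahead of the
# clock; around mid-February it runs roughly 14 minutes behind. It is the
# rapid swing through this range near the winter solstice that separates
# earliest sunset (early December) from latest sunrise (early January).
print(round(equation_of_time_minutes(307), 1))
print(round(equation_of_time_minutes(45), 1))
```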
- Dbfirs's comments about "December" and "January" are true for the northern hemisphere, but not for the southern hemisphere. The facts can be summarised in a way that is equally valid for both hemispheres: The day of earliest sunrise occurs earlier than the summer solstice, and the day of latest sunrise occurs later than the winter solstice. Similarly, the day of earliest sunset occurs earlier than the winter solstice and the day of latest sunset occurs later than the summer solstice. Dolphin (t) 23:35, 28 January 2013 (UTC)
- Sorry, Dolphin, for accidentally forgetting Australia etc. We do this too often in the north! Thanks for your improvement on my statement. Dbfirs 21:06, 29 January 2013 (UTC)
- My pleasure! (We had 42 deg Celsius a week ago!) Dolphin (t) 03:56, 30 January 2013 (UTC)
- We've just had a fortnight with the temperature constantly below zero, but it is milder now (daytime max.7C) with floods yesterday afternoon. Couldn't we exchange a little bit of weather? Dbfirs 08:17, 30 January 2013 (UTC)
Here is my question on Y/A. I also need help
http://answers.yahoo.com/question/index;_ylt=As0ZpbMolRquFSK_zBvOfgu9DH1G;_ylv=3?qid=20130128225553AA09QS1 Eclectic Eccentric Khattak No.1
ninhydrin
Ninhydrin is said to test for the presence of the amino group, so it can be used in factories manufacturing textiles from milk. Does it have any effect on a food substrate such that the substrate cannot be used later? — Preceding unsigned comment added by 212.49.86.44 (talk) 05:47, 28 January 2013 (UTC)
- I've never heard of that particular usage of ninhydrin. Its main use is to turn purple in the presence of free amines; there are several specific uses in the Wikipedia article titled ninhydrin. If you have a link to the usage you are talking about, could you provide it here so we can look at it and help you out? --Jayron32 06:05, 28 January 2013 (UTC)
T90 and leclerc tanks autoloader
How is the ammunition transferred to the autoloader after it becomes empty (from outside or inside the tank)? Do you think it is practical to have the ammo separated? Doesn't that expose the crew to the threat of being attacked while reloading the autoloader? How much time does it take to reload the autoloader? — Preceding unsigned comment added by Tank Designer (talk • contribs) 12:57, 28 January 2013 (UTC)
- From the look of things here, the remaining 21 rounds stored outside the autoloader would have to be reloaded from the outside ("special packing cases in the hull" doesn't sound much like a glovebox to me). As for whether this is practical, etc, all design considerations are tradeoffs. Certainly every tanker would love to carry a few hundred rounds of ammo, all immediately and internally accessible, without otherwise impairing the performance of the tank, but that can't be done. Russia has a long history of reasonable tank design, and I strongly suspect that their tactical doctrine maps to having at most 20 rounds available per engagement, at which point the tank retires from direct combat to re-arm. Web fora discussing the T-72 put in-field reloading by the commander and gunner from carried ammo at 10-15 minutes under such conditions. — Lomn 14:52, 28 January 2013 (UTC)
- Also note that tanks with huge ammo stores defeat the purpose of the ammo. The purpose of the ammo is to be used, and if you have huge ammo loads on few platforms, most of the ammo wouldn't be used. Up (or down) to a certain point, another tank is more important than another few shots in a long firefight - and each round that explodes when the carrying tank is destroyed is a wasted round.
- And the gun can't keep firing at full speed either. It'll heat up and degrade towards the end of a 20-round bombardment - and going beyond that point invites disaster (cook-off). - ¡Ouch! (hurt me / more pain) 11:33, 30 January 2013 (UTC)
Great Lakes region before last glacial period
Hi. I'm looking for a map, or description, or, well, just about any kind of information on the Midwestern US before the last glacial period. As I understand it from reading the Great Lakes article, the present lakes were created as the Laurentide ice sheet retreated at the end of the Wisconsin glaciation, and they only assumed their current shape a few thousand years ago. But why shouldn't the same have happened after the previous glacial period?
Many thanks JaneStillman (talk) 16:08, 28 January 2013 (UTC)
- There probably were lakes there in the past, but the scouring effects of the last glaciation would have altered the lake boundaries and removed many of the markers of the previous shoreline. Hence it would be very challenging to say what the lakes looked like in the past. I'm not aware of any sources that speculate on specific boundaries during prior interglacial periods. Dragons flight (talk) 18:16, 28 January 2013 (UTC)
- Hm, okay. Thanks, Dragons flight :) JaneStillman (talk) 09:40, 30 January 2013 (UTC)
Chicken nutrition revisited
A few days back I asked a question about protein content in cooked chicken. In that post, User:Sagittarian Milky Way pointed out that the protein content of cooked chicken also depends on its water content: the more water cooked out, the higher the protein percentage. So I think it is better to analyze the protein content of raw chicken. I searched Google and found different sources making different claims.
The following data are for 100g raw chicken:
- NutritionData: Ground Chicken - 17 g
- myfitnesspal: Chicken breast without skin - 23 g
- myfitnesspal: Chicken breast with skin - 21 g
- Livestrong: Chicken breast - 23 g
- calorie count: Ground chicken - 19.5 g
My questions are:
- What is the exact amount of protein in 100 g raw chicken muscle?
- What is the nutritional value of chicken skin? Is it mostly fat?
- Is the nutritional value of regular chicken muscle different from that of ground chicken? --PlanetEditor (talk) 16:16, 28 January 2013 (UTC)
- I browsed around the Nutrition.gov website and easily found the Nutrition Database, in which you can look up factual answers to your questions for various types of generic- and brand-name food products, including raw chicken, and different cuts of chicken meat. Nimur (talk) 16:48, 28 January 2013 (UTC)
- Chicken, roasting, meat only, raw. What does "roasting" mean here? --PlanetEditor (talk) 16:54, 28 January 2013 (UTC)
- I think it refers to the expected use of the meat, as an indication of the grade or quality of product that the data relate to. AlexTiefling (talk) 17:00, 28 January 2013 (UTC)
- Roasters and broilers are age ranges which are best cooked with those methods, I think. Sagittarian Milky Way (talk) 19:07, 28 January 2013 (UTC)
- I think it refers to the expected use of the meat, as an indication of the grade or quality of product that the data relate to. AlexTiefling (talk) 17:00, 28 January 2013 (UTC)
- I think the OP is looking for a more precise answer than actually exists. The exact ratios of fat to protein in chicken will vary from muscle to muscle (thigh meat will have different values than breast meat), and also to some extent from bird to bird. The numbers you get are probably going to have some degree of variability, and you're simply not going to get a single number which is scrupulously correct for all parts of every chicken in existence. --Jayron32 18:39, 28 January 2013 (UTC)
- To some extent, that's why the top-level nutrition recommendations from the USDA use a much more coarse unit of measure than the gram: the "unit of protein." More recently, the USDA is using the terminology "ounce-equivalent of protein food." But still, the idea is that you've only really got about one significant figure of reliable data. There's just too much variation between one meal and the next to measure protein content to several decimal places. Nimur (talk) 22:34, 28 January 2013 (UTC)
- Also take into account that a lot of chicken meat has been subject to plumping, thereby significantly increasing the amount of water in the meat. --Saddhiyama (talk) 10:18, 29 January 2013 (UTC)
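Nimur's one-significant-figure point can be illustrated directly with the figures quoted above: rounded to one significant figure, all five sources agree. A small sketch (the `round_sig` helper is illustrative, not from any of the cited sites):

```python
from math import floor, log10

def round_sig(x, sig=1):
    """Round x to the given number of significant figures."""
    return round(x, -int(floor(log10(abs(x)))) + (sig - 1))

# Protein per 100 g raw chicken, as quoted above (grams):
quoted = [17, 23, 21, 23, 19.5]
print([round_sig(v) for v in quoted])  # all collapse to about 20 g
print(f"range: {min(quoted)}-{max(quoted)} g, mean ~{sum(quoted)/len(quoted):.0f} g")
```

In other words, the sources don't really disagree once you account for bird-to-bird and cut-to-cut variability; the spread is within the precision the data can support.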
Resonant frequency of an inductor
What parameters affect the resonant frequency of a particular inductor? Is it solely the number of turns of wire? How do the cross section and composition of the wire, width of core, and spacing between turns affect the frequency, if at all? One more thing: I once saw a simple inductor that consisted of something like 50 turns of wire then 25 turns back over the first layer. What is the purpose of that? A self-transformer or some such?! 75.228.139.34 (talk) 17:12, 28 January 2013 (UTC)
- The first equation in our article Inductor#Inductance_formulae says that the inductance is proportional to the square of the number of turns, to the cross-sectional area of the coil and inversely proportional to the length of the coil. The cross section of the wire and the composition don't affect the inductance. The spacing between turns does matter because a wider spacing increases the length of the coil (for some given number of turns). SteveBaker (talk) 17:35, 28 January 2013 (UTC)
- By itself, an ideal inductor has a reactance that's proportional to the applied frequency, so it doesn't have a resonant frequency. You need a more complicated circuit, like an LC circuit, for an inductor to form part of a circuit that has a resonant frequency. The resonant frequency in hertz of an LC circuit is f = 1/(2π√(LC)),
- into which you can plug one of the various inductance formulas for L. Red Act (talk) 19:34, 28 January 2013 (UTC)
- The self-resonant frequency of a real inductor due to parasitic capacitance isn't all that straight-forward to model or calculate. For example, for a single-layered air-cored inductor, if the inductor has a moderately large number (>20) of turns, the self-capacitance is fairly purely a function of the size and shape of the coil, and is unrelated to the number of turns, but for a very small number of turns, the self-capacitance increases with the number of turns. As another example, the skin effect within the coil's conductor becomes increasingly important at higher frequencies, and unlike with isolated conductors, the skin depth becomes more complicated when the turns in a winding are close to each other, due to the proximity effect. And the winding pattern of a real inductor is also an important consideration, with basket winding often used to lessen the parasitic capacitance. Real inductors are often modeled with a 3-element RLC circuit, but that simple model isn't very accurate at frequencies greater than about a fifth or a tenth of the inductor's self-resonant frequency. Real inductors can be modeled more accurately at higher frequencies with a four-element or five-element circuit.[3] Red Act (talk) 17:48, 29 January 2013 (UTC)
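The two formulas in this thread can be combined numerically: the long-solenoid inductance L = μ₀N²A/l from the inductor article, and the LC resonance f = 1/(2π√(LC)). A sketch with illustrative (made-up) coil dimensions; real coils deviate from the ideal long-solenoid formula, as discussed above:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, henries per metre

def solenoid_inductance(turns, area_m2, length_m):
    """Ideal air-cored solenoid: L = mu0 * N^2 * A / l (long-coil approximation)."""
    return MU0 * turns**2 * area_m2 / length_m

def lc_resonant_frequency(inductance_h, capacitance_f):
    """Resonant frequency of an ideal LC circuit, in hertz."""
    return 1.0 / (2 * math.pi * math.sqrt(inductance_h * capacitance_f))

# 100 turns, 1 cm^2 cross-section, 5 cm long: about 25 microhenries.
# Note the N^2 dependence: doubling the turns quadruples the inductance.
coil = solenoid_inductance(100, 1e-4, 0.05)
print(coil)

# With 100 pF across it (e.g. a parasitic self-capacitance of that order),
# the combination self-resonates in the low-megahertz range.
print(lc_resonant_frequency(coil, 100e-12))
```

Note how this answers part of the original question: wire composition drops out of the ideal formula entirely, while turn spacing enters only through the coil length l.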
Sci fi
How much of sci-fi like Star Trek could actually become reality? Clover345 (talk) 17:40, 28 January 2013 (UTC)
- This is much too broad a subject for the Reference Desk. There are entire books on subjects like The Science of Star Trek dedicated to exploring questions like this for any given series. However, it's fair to say that some things appear to be entirely impossible - faster-than-light spaceships, matter transmission, instant subspace/ansible communications - and others merely beyond our current capabilities - laser weapons, human-level AI. And some SF (including Star Trek) includes mention of fantasy elements like psychic powers, which do not even pretend to represent developments of real science. AlexTiefling (talk) 17:45, 28 January 2013 (UTC)
- We do have an article: Physics and Star Trek, which includes a list of 'Further reading' suggestions. ~E:74.60.29.141 (talk) 19:19, 28 January 2013 (UTC)
Almost none of it. No alien species that can be played by humans in facial prostheses and bad make-up, no accidental resemblance of all alien languages to English dialects or period accents, no time travel, no FTL travel, no artificial gravity plating, no interbreeding with life from other planets, no transporters. We do have primitive "phasers" of various sorts, and we already have "communicators" that would boggle Jean-Luc Picard. Hope I am wrong about warp speed, but if I were wrong the aliens would already be here. μηδείς (talk) 20:26, 28 January 2013 (UTC)
- That's hardly fair. If you look at older sci-fi and ask how much of that came true, quite a bit has. The original series StarTrek communicators, for example, are completely dwarfed in performance by modern mobile phones... remember the "Dick Tracy" wrist-TV communicator... again, easy! Everyone in StarTrek uses piles of hand-held tablets that look just like an iPad (god knows why they needed so many of them - perhaps multitasking operating systems didn't exist!). Computer AI isn't there yet - but take a look at "K9" (Dr Who's robot dog) and it's just laughably primitive compared to modern robotics. Even R2D2 is a fairly awful robot compared to what we can currently manage. "Data" on StarTrek has fancier AI and amazing motors and power supplies - but the current generation of Japanese humanoid robots have a physical appearance that's much more natural.
- I vividly recall the first gen StarTrek medical teams having that thing that could squirt drugs right through your skin without needles...that technology is also here today. Several companies now make "tricorders" that you can actually buy.
- In Hitchhikers' guide to the Galaxy, there is this little electronic book that has the complete repository of all knowledge contained within it. Welcome to Wikipedia on your tablet computer (you have to cross out the Apple logo and write "DON'T PANIC!" in large friendly letters on the back yourself though).
- Much of SciFi's gadget predictions are things that we utterly wouldn't want - we could almost certainly clone humans right now - but we've decided that it's "A Bad Thing", so we don't. We can put jellyfish DNA into a poodle so it glows in the dark...we could certainly do similar tricks to make genetically distinct humans - but it's another "Bad Idea", so we don't.
- The more recent the SciFi is, the tougher it is for us to meet what they show - but that's mostly because authors of SciFi have to continually push the envelope to make the future seem different from today. But if you go back a generation or two, look at 40 to 50 year old SciFi, it's quite impressive how many of those things we now have working. Obviously there will always be totally impractical things (Warp drive, transporters, etc) - but the vast number of small gadgets they have can easily be improved upon these days. SteveBaker (talk) 21:02, 28 January 2013 (UTC)
- I don't agree that "The original series StarTrek communicators, for example, are completely dwarfed in performance by modern mobile phones". Our cell phones require installing a vast network of cell towers around the planet, while the ones in ST work more like satellite phones. Being able to communicate with each other and ships in orbit, seemingly without the need for a recharge, and without a visible antenna or massive battery pack, is impressive. StuRat (talk) 21:47, 28 January 2013 (UTC)
- Steve, I'll tell you why the TNG people have so many tablets! So you can get a big-picture understanding of your work by physically spreading documents over your whole desk. If I could replicate as many free tablets as I wanted, I'd have at least half a dozen on my desk alone! APL (talk) 08:04, 29 January 2013 (UTC)
- I'd have one on me at all times, and a decent computer too. One with a real keyboard. Without touch feedback (e.g. on-screen keyboard), I'm as slow compared to a real keyboard as a human is compared to Data. The PADD would be just right for the things where a full-sized computer feels like "too much" (e.g. calculator app, browsing thru mails, etc). The most remarkable feat of ST communicators is that Fed manages to keep all the spam and the cold calls away.
- *bleep* "Mr Picard, it has been ten months since you departed from Risa. Would you like to spend your next holidays with us in <insert stardate>, please order now for a 20% discount..."
- "Listen up cretin, this is not the time, we're about to strike a Borg mothercube, to save your stupid little derriere from assimilation."
- "OK sir, I'll call again in five minutes then..." *bloop*
- *bleep* (Riker's comm this time) Riker doesn't answer the call.
- *bleep* ... *bleep* Riker to Picard: "Risa's calling me too, Sir..."
- Troi: "Hey, we've been to Risa, too. We could use that discount, don't you think so???"
- Data: "Too soon. It's been nine, not ten months for the two of you. No discount for you, but an upgrade to the Riker family." :P - ¡Ouch! (hurt me / more pain) 11:56, 30 January 2013 (UTC)
- Jet injectors existed already when Star Trek debuted, so that wasn't a prediction. Capacitive touch screens also predate the show, and CCDs and TN LCD panels came just a few years later. It took decades for them to become cheap and reliable enough to show up in consumer devices, but it's not as though no one had thought of the idea before that. We obviously have nothing resembling a tricorder, though there may be novelty devices sold under that name. A mere visual similarity of current devices and classic TV show props is not all that interesting when you consider that the designers probably saw those TV shows.
- I'm amused that you think present-day androids look more human than Brent Spiner. Obviously he was given makeup and contacts for dramatic purposes, and not because the writers believed that technology would get that close to perfectly simulating a human and yet be unable to get the skin or eye color right. Likewise, K9 was primitive because of budget limitations and not because the writers were making some sort of prediction that we've now surpassed.
- The HHGTTG was a conventional encyclopedia, as far as I can remember, aside from the rather lax management and being served up over the sub-etha (which can't actually be done at a galactic scale). Douglas Adams also described humanity as a species so primitive that it still thought digital watches were a pretty neat idea. We've since decided they're not so neat, so you could count that as a successful prediction. We currently think 3D movies and cloud computing are neat, but neither one for the first time, and I'm sure those pendulums will swing back at some point. I hope the current fad of keyboardless computers will pass too. Sue Gardner mentioned at a talk I attended that they have noticeably decreased the number of casual Wikipedia editors since they make it hard to do anything but passively browse.
- The International Space Station (mentioned by APL below) is mostly a sad reminder of what we haven't achieved in that area. It's not a bustling hub for commercial flights to the moon and Mars. Every trip to it is enormously expensive, and the US no longer even has the capability to get there since it scrapped the shuttle program with nothing to replace it. The station struggles to find a scientific purpose to justify its existence, and to a large extent has relied on publicity stunts and experiments that are more hype than substance.
- There's a tractor beam that could pull very tiny spaceships.[4] Sean.hoyland - talk 07:24, 29 January 2013 (UTC)
- There is a 1956 Asimov story called The Dying Night that revolved around the idea that in the future all scientists would carry portable pen-sized document scanners that used photographic film. Nowadays, digital pen-sized document scanners are available, but most people use the cameras on their phones as document scanners. Not just scientists, either. So there, technology has surpassed SciFi.
- The 1909 short story The Machine Stops, by E. M. Forster, revolved around the idea that in the future there would be "The Machine" that allowed us to transmit ideas and information instantly across the globe. It follows this to its logical conclusion and describes people called "lecturers" who are basically bloggers.
- Previous to 1969 there were a whole lot of stories, novels, and movies about sending men to the Moon. NASA managed to pull that off a few times.
- Space Stations once only existed in SciFi, now they shoot Sesame Street segments up there.
I'm sure we could all come up with many examples of SciFi technology that became real. APL (talk) 08:25, 29 January 2013 (UTC)
- On a more serious note, it looks like some devices (pads, clam-shell phones) are only so popular because their imaginary counterparts were in a popular sci-fi show. Do we have articles on that (Psychological effects of science fiction? List of electronic devices resembling science fiction props?) I found Cultural influence of Star Trek so far. And Sexuality in Star Trek, FWIW. - ¡Ouch! (hurt me / more pain) 11:56, 30 January 2013 (UTC)
Diabetes
my mother had diabetes, and it was hereditary in her family; her parents had it also. I am one of 5 children. What are the chances that I will also inherit diabetes? --109.232.72.49 (talk) 21:13, 28 January 2013 (UTC)
- This page answers your question in detail. It's pretty complex it seems, and it depends on the type of diabetes. Furthermore, external factors play a role in addition to your genes. - Lindert (talk) 21:25, 28 January 2013 (UTC)
Lindert is right that it depends on what type of diabetes and some characteristics of you. But the page link is simplified and only covers the two most common types of diabetes. Other types are more strongly inherited. If you can give us some more details of your mother's diabetes we might be able to do better. alteripse (talk) 22:15, 28 January 2013 (UTC)
- There are signs of and very helpful treatments for pre-diabetes. See your physician as soon as possible. μηδείς (talk) 22:54, 28 January 2013 (UTC)
Please note that diagnosing someone's genetic susceptibility to diabetes is a sort of medical advice that the Refdesk can't give. Though obviously knowing your age and the type of diabetes your relatives had is a great first step in understanding, trying to collect more data about your case with the notion of giving you an individualized answer ... is still going to be laughable compared to any actual medical visit. However, I believe more than most here that we should give people helpful information in these cases, and so I approve of Lindert's response provided that you understand this is recommended reading, not a personal answer for you. It may also be helpful to read about prediabetes. You may also want to learn more about a blood glucose meter, a useful over-the-counter tool - in the end, biology accepts no substitute for hard data. Wnt (talk) 04:52, 29 January 2013 (UTC)
- Medeis is half right. There are useful treatments. However, there are no signs and symptoms of prediabetes, just statistical risk factors. And giving a probability is less medical advice than "see your doctor"-- which IS medical advice. alteripse (talk) 12:15, 30 January 2013 (UTC)
- Our article says there are no signs or symptoms, then, oddly enough, lists them. I suspect what they meant to say is that there are no distinct signs or symptoms, because they are the same as for type 2 diabetes, but to a lesser degree. This should really be clarified in the article. I will add the word "distinct". StuRat (talk) 18:57, 30 January 2013 (UTC)
- More clearly, a prediabetic may not be aware of any symptoms, but there are diagnostic methods of detecting it. μηδείς (talk) 18:38, 30 January 2013 (UTC)
- One symptom they didn't list, which I associate with pre-diabetes, is the glucose spike/glucose crash cycle. That is, while normal individuals can eat sweets and be OK, a pre-diabetic may have a dramatic sugar spike (and become hyper), followed by a sugar crash (where they become lethargic). Is this not a recognized symptom ? StuRat (talk) 19:07, 30 January 2013 (UTC)
- No. There is no evidence that people who describe symptoms that they imagine are due to sugar "spike and crash" have a specifically higher diabetes risk than that of their demographics: typically American culture, more than high school education, less than endocrine fellowship education. Most of them have no demonstrable abnormality of glucose metabolism. Overall lifetime diabetes risk for that category of Americans is higher, however, than the lifetime diabetes risk for uneducated Tuaregs. However, again we can play the "change the characteristics and change the odds" game. Let's say your putative patient with sugar "spike and crash" symptoms actually meets both postprandial hyperglycemia and Whipple criteria: has checked glucose after meals and confirmed it is high, and has checked glucose while shaky and confirmed it is low. Actually meeting both criteria at the same time would make him quite unusual but interesting. He might have glycogen synthase deficiency, a very rare defect of glycogen metabolism which does not carry a higher diabetes risk, or, if black and female, might be an adolescent with type A insulin resistance, in which case her future diabetes risk is quite high. It all depends on details. alteripse (talk) 19:27, 30 January 2013 (UTC)
I agree entirely with the policy that we cannot diagnose and treat medical problems here. We can however answer general medical questions about general situations. For example: What is the chance that the son of a woman diagnosed with diabetes will get diabetes? For the general population, assuming the broadest possible definition of "mother diagnosed with diabetes", the most probable answer is something in the ballpark of "about 20% of offspring of a woman diagnosed with diabetes will be diagnosed with diabetes during their lifetimes". However, as noted above, with more information about the type of diabetes and the age of the offspring, it might be possible to give a much more specific statistical probability without actually giving medical advice. For example, "If an American woman of European ancestry, normal body build, and no known relatives with type 2 diabetes was diagnosed in childhood with ordinary type 1 diabetes with positive antibodies and an insulin requirement from diagnosis, her offspring have a 3% likelihood of being diagnosed with type 1 diabetes in their lifetimes, an about 2.5% chance of being diagnosed with type 1 before age 20, and a 5% chance of being diagnosed with type 2 diabetes after age 40". But if any single characteristic in that description were changed, the probability would change as well. For example, "If an American woman of Hispanic ancestry, overweight, hypertensive and hypercholesterolemic, with 3 relatives with type 2 diabetes, was diagnosed at age 40 with type 2 diabetes, her male offspring have about a 5% chance of being diagnosed with type 2 diabetes before age 20, a 20% chance by age 40, and a 60% lifetime chance; this is 10% higher if the offspring is female, 50% higher if the offspring is obese, and 70% higher if hypertensive and hypercholesterolemic". If the parent has characteristics of monogenic diabetes, the odds may be 50%.
So the probability is all over the place, ranging from as low as 1% in non-American populations with low obesity rates to >90% in Americans of certain high-risk ethnicity and other risk factors. Given that the original question does not seem to have been posed by a literate native English speaker, this may be the type of answer way beyond comprehension and usefulness. But it would still be a permissible statistical fact, and not a matter of "giving medical advice." alteripse (talk) 19:13, 30 January 2013 (UTC)
Leg cramps
- This question has been removed as it may be a request for medical advice. Wikipedia does not give medical advice because there is no guarantee that our advice would be accurate or relate to you and your symptoms. We simply cannot be an alternative to visiting the appropriate health professional, so we implore you to try them instead. If this is not a request for medical advice, please explain what you meant to ask, either here or at the talk page discussion (if a link was provided).
- Do see your physician, there are various successfully treatable conditions that require medical diagnosis and assistance. μηδείς (talk) 22:52, 28 January 2013 (UTC)
The rank of gram positive bacteria
What is the taxonomic rank of Gram Positive Bacteria because I just can't seem to find the answer. — Preceding unsigned comment added by Lightylight (talk • contribs) 23:07, 28 January 2013 (UTC)
- Gram positive bacteria were previously identified with one or more phyla (e.g. Firmicutes), but we now believe that such bacteria are not strictly a monophyletic group, and as such the most widely used system of bacterial classification does not use gram staining to define taxonomic nomenclature. The article on gram positive bacteria gives more information. Dragons flight (talk) 23:25, 28 January 2013 (UTC)
January 29
Question about the Sunrise and Sunset
It is well known that the spherical Earth not only rotates about its own axis but also orbits the sun in an elliptical orbit. If this is true, then why does the sun rise and set at almost the same times on the parts of the globe on or close to the equator, e.g. Sri Lanka, when there is a geometrical shift of sunrise and sunset between antipodal points of the Earth as it advances from summer solstice to winter solstice in its yearly orbit through its four orbital events around the sun?
Places on Earth experience day when facing the sun and night when not, at any given instant as the Earth advances in its elliptical orbit around the sun. For the geometrical shift of sunrise and sunset between antipodal points, let's imagine the Earth's positions at the following:
1- Summer solstice: Sun rises at “A” and sets at “B” while at
2- Winter solstice: Sun rises at “B” and sets at “A” similarly
3- Vernal equinox: Sun rises at “C” and sets at “D” while at
4- Autumnal equinox: Sun rises at “D” and sets at “C”
Since nights lag behind by *12 hrs when the Earth moves around the sun in its yearly orbit from summer solstice (or vernal equinox) to winter solstice (or autumnal equinox), or between any antipodal orbital (elliptical) points, and similarly by 24 hrs in one complete cycle around the sun, shouldn't the timing of sunrise on the globe at its summer solstice become the sunset time at its winter solstice, instead of the regular sunrise time?
- I didn’t calculate the lagging time
108.173.128.208 (talk) 03:47, 29 January 2013 (UTC)Eclectic Eccentric Khattak No.1
- I'm not entirely sure I understand your question here, but there's a discussion above which may be related, titled "Why don't the the sunrise and sunset times go the other way at the solstice?". The article equation of time may be useful for you as well. --Jayron32 03:51, 29 January 2013 (UTC)
- I may be way off the mark, but I think what you're looking for is that the sidereal day is different from the solar day. We set the 24-hour length of the day as the average time from highest sun to highest sun, not the time it takes the Earth to rotate in place! So the stars rise at different times each day. The solar day is only an average - as the equation of time states, there is some variation from month to month based on the Earth's imperfectly circular orbit, which makes the Sun appear to move faster relative to the stars at some times of the year rather than others. Wnt (talk) 05:05, 29 January 2013 (UTC)
I am unable to post a diagram, but you can find the four important events of the Earth in the following link as it orbits the sun in its elliptical orbit.
For simplicity
Mark two antipodal points “A” and “B” on the earth at its position #1 in the link diagram, where the sun rises at “A” and sets at “B”. Move the same globe to its position #3. You can find the shift change in sunrise and sunset after comparing positions #1 and #3, i.e.
Sun rises at “A” and sets at “B” in position #1, while sun rises at “B” and sets at “A” in position #3.
Mark two antipodal points “C” and “D” on the earth at its position #2 in the link diagram, where the sun rises at “C” and sets at “D”. Move the same globe to its position #4. You can find the shift change in sunrise and sunset after comparing positions #2 and #4, i.e.
Sun rises at “C” and sets at “D” in position #2, while sun rises at “D” and sets at “C” in position #4. 108.173.128.208 (talk) EEK
- I'm not sure about your question but the picture is slightly wrong in that the sun is at one focus of the ellipse rather than at the centre of the ellipse. Also the earth moves fastest when it is closer to the sun so you can't split the ellipse into quarters of the time the way you do. See Kepler's laws of planetary motion for an illustration of this. Dmcq (talk) 09:19, 29 January 2013 (UTC)
- I agree the question is probably about Sidereal time. The earth does not go around once on its axis in 24 hours but in slightly less time - if we used the stars as our guide rather than the sun and so had shorter days a year would take one extra day. Dmcq (talk) 10:15, 29 January 2013 (UTC)
- I believe the OP's question is this: If at the summer solstice the sun rises at 5AM and sets at 7PM, then at the winter solstice why doesn't it rise at 7PM and set at 5AM, since we've gone halfway around the sun? And more generally, if at one point in Earth's orbit it rises at time A and sets at time B, then at the opposite point in its orbit (6 months later) why doesn't the sun rise at time B and set at time A?
- To elaborate on Dmcq's answer, I think we can say that the questioner would be more or less right if 24 hours was how long it takes the Earth to rotate completely once relative to the stars, so if you held the time of day constant (by stopping Earth from spinning relative to the stars) and moved Earth halfway around its solar orbit, you'd find yourself at the opposite point in the day/night cycle (but not exactly so since the orbit around the sun is not quite perfectly circular). But actually that one rotation relative to the stars takes only 23 hours, 56 minutes, and 4 seconds, whereas 24 hours is how long it takes on average for us to go from noon one day to noon the next day (i.e., one rotation relative to the sun), so that kind of shifting of the day/night clock-time ranges is prevented from occurring. Duoduoduo (talk) 18:15, 29 January 2013 (UTC)
It will just take a few seconds to understand the crux of the problem if one starts looking at all the aforesaid positions of the Earth from the sun's angle / the top view of the diagram, for all its mornings and evenings. (Please forget about the stars or observing from other angles for a while.)
All the aforementioned pegs/points posted at A, B, C and D on the earth are on or close to its equator. Days and nights of 12 hrs (almost) on or close to the equator would be fine if the Earth did not move in its elliptical orbit around the sun, but since it does move around the sun besides rotating about its own axis, the next sunrise will be at point A+1, not at A, similarly A+2, A+3 ... and so on, until the sunrise reaches the sunset point B (position #3) and finally its original point A (position #1) after completion of one cycle (one year) around the sun. I may repost my question with rephrasing if it is difficult to understand. 108.173.128.208 (talk) 22:33, 29 January 2013 (UTC) EEK
- Did you read the bit in the answers about that the earth rotates in 23 hours 56 minutes and 4 seconds approximately, not 24 hours? Dmcq (talk) 00:49, 30 January 2013 (UTC)
That 24 hrs (approx) was just for simplicity! Since planets move faster near the sun, such a shift change can be observed more rapidly when the Earth moves close to the focus/sun, from one endpoint of the latus rectum to the other through the vertex of its elliptical orbit. So in which part of the year does such a rapid shift change (approx 12 hrs) occur, even if the Earth completes its rotation about its axis in less than 24 hrs? — Preceding unsigned comment added by 108.173.128.208 (talk) 21:27, 30 January 2013 (UTC)
- Try multiplying the difference between 24 hours and 23 hours 56 minutes 4 seconds, that is 3 minutes and 56 seconds, by half a year's worth of days and see how much it adds up to. Dmcq (talk) 21:31, 30 January 2013 (UTC)
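A quick back-of-the-envelope sketch of Dmcq's multiplication in Python. The day lengths are the figures quoted in the thread; 365.25 days per year is the usual approximation:

```python
# Daily gap between the solar day (24 h) and the sidereal day
# (23 h 56 m 4 s), accumulated over half a year.
solar_day_s = 24 * 3600                    # 86400 s
sidereal_day_s = 23 * 3600 + 56 * 60 + 4   # 86164 s
daily_gap_s = solar_day_s - sidereal_day_s # 236 s per day

half_year_days = 365.25 / 2
accumulated_h = daily_gap_s * half_year_days / 3600

print(f"gap per day: {daily_gap_s} s")
print(f"accumulated over half a year: {accumulated_h:.2f} h")
```

The accumulated difference comes to roughly 12 hours, which is exactly the "shift" the questioner is looking for: it happens gradually, about 4 minutes per day, rather than as a sudden swap of sunrise and sunset points.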
why isn't drinking yourself to death a more popular suicide option?
It seems like kind of a peaceful way to die if you get your BAC over 0.50% ... there's also injecting oneself with alcohol. I don't hear of many people taking this route, however; overdosing on pills seems quite popular instead, which generally isn't lethal. (This is for curiosity only.) — Preceding unsigned comment added by 71.207.151.227 (talk) 07:03, 29 January 2013 (UTC)
- Answering the "why" question is likely impossible. We can answer other questions, such as what methods are most common. Suicide#Methods has some numbers. --Jayron32 07:15, 29 January 2013 (UTC)
- At least in my case, drinking myself to death wouldn't work. If I drink too much, I barf it back out. I've never had a hangover, as a result. This seems like a rather sensible reaction to alcohol, and, based on the number of people who die from the chronic effects of alcohol, I'm surprised this evolutionary adaptation isn't more widespread. Perhaps the few thousand years we've been distilling alcohol isn't long enough.
- As for injecting alcohol, it would have to be fairly dilute, or it would be extremely painful. And, being dilute, you'd have to inject quite a bit. StuRat (talk) 07:31, 29 January 2013 (UTC)
- I think most suicide victims are looking for a technique with which they can quickly pass the point of no return. You could down a whole bottle of sleeping pills in seconds. Shooting yourself, or jumping would also be over in seconds, but drinking yourself to death would take a little while, and you could stop at any point.
- Besides, You'd hate to accidentally survive with permanent brain damage. APL (talk) 07:55, 29 January 2013 (UTC)
- According to List of preventable causes of death, 1.9 million deaths a year are attributable to alcohol.--Shantavira|feed me 08:22, 29 January 2013 (UTC)
- I don't think the majority of those are due to blood alcohol toxicity. Cirrhosis of the liver is not an effective method of suicide, and choking on one's own tongue or vomit is far too hit-and-miss. AlexTiefling (talk) 09:28, 29 January 2013 (UTC)
- I can think of two good reasons, both because alcohol is like a very sloppy poison: before someone gets to the point of death from alcohol, they would either feel *much* better about things & decide not to go through with it (I'm sort of oversimplifying, of course); or they would pass out because the brain's survival mode would have kicked in -- unfortunately asphyxiation would then be possible, but as AlexT says it's not guaranteed, especially since one's choking reflexes might be enough to temporarily wake up or turn to the side. El duderino (abides) 09:49, 29 January 2013 (UTC)
- I think the usual direction if any is from alcohol to suicide rather than the other way round. Dmcq (talk) 10:47, 29 January 2013 (UTC)
- I suspect there is a very good chance that most people wouldn't think of alcohol in that way...either they wouldn't realize that this is a possibility - or when thinking of the possible ways to terminate their lives, that one simply doesn't pop into their heads. SteveBaker (talk) 22:03, 29 January 2013 (UTC)
- Most healthy people not on other drugs who die from alcohol poisoning die from consuming it much too quickly, often from some or total lack of experience. There are plenty of cases of teens and twenty-one-somethings in the US who do too many shots or jello shooters too quickly to feel the full effects before they get to the point of no return. The Callahan hazing in NJ in the 80's was a famous case, with him doing two dozen "kamikazes" in under a half hour. Of course Amy Winehouse managed it. μηδείς (talk) 22:22, 29 January 2013 (UTC)
Genus outbreeding
Can animals of the same genus breed with any of each other? Could, for example, an arctic fox and red fox successfully breed with each other? (I am not very familiar with biology.) --66.190.69.246 (talk) 08:36, 29 January 2013 (UTC)
- In the case of the arctic and red foxes, they can interbreed, but their offspring is infertile (see this article, page 5 under the heading "Genetics"). Note that the arctic fox was previously placed in a different genus (Alopex), but is now included in Vulpes. There is no general rule about animals of the same genus. Sometimes it is possible, sometimes it isn't. In some cases the offspring is even fertile, and in that case the line between the species may become blurred. Keep in mind that biologists do not test all possible combinations of cross-breeding before classifying species. In the case of the dog and the wolf, they were formerly considered different species, but have since been renamed because they have been shown not to be genetically isolated from each other (and they can interbreed without any limitations, see wolfdog). - Lindert (talk) 09:17, 29 January 2013 (UTC)
- See Hybrid (biology). Duoduoduo (talk) 17:47, 29 January 2013 (UTC)
A species is defined as a group of animals that can breed with each other successfully -- "successfully" meaning that the offspring are fertile themselves. So if animals are not in the same species, they cannot breed with each other successfully. They may be able to produce offspring, but the offspring will not be fertile. The caveat to this, as Lindert says above, is that biologists usually don't actually test for interbreeding before assigning a species designation.
- Actually defining a species is a lot more complicated than that -- see Species problem. Duoduoduo (talk) 19:55, 29 January 2013 (UTC)
Aakash Institute
please show me result of Aakash Institute (ANTHE-2012) roll no. 36001133 — Preceding unsigned comment added by 164.100.194.115 (talk) 09:27, 29 January 2013 (UTC)
- Question heading added. AndrewWTaylor (talk) 09:38, 29 January 2013 (UTC)
- Go here, then enter the roll number and the letters shown to view the results. (I can't link to the results themselves). - Cucumber Mike (talk) 15:13, 29 January 2013 (UTC)
Planets or satellites with a direction of revolution opposite to that of their rotation
Viewed from the (so-called) 'top', Earth rotates anticlockwise on its axis and also orbits round the sun in the anticlockwise direction. Are there any planets or satellites in our solar system that rotate in one direction but orbit their primary in the opposite direction? Is there any name for such motion? I am not talking about retrograde motion - WikiCheng | Talk 14:29, 29 January 2013 (UTC)
- Venus does exactly that. Uranus' axis is at a 90 degree angle to the ecliptic. Several of the gas giant moons have retrograde rotation too. Fgf10 (talk) 14:44, 29 January 2013 (UTC)
- Uranus' axial tilt is actually close to 98 degrees, which means that - like Venus - it has a retrograde rotation. Dauto (talk) 16:38, 29 January 2013 (UTC)
- Well, if you measure the axial tilt as 82 degrees, the planet has retrograde rotation; and if you define by sign-convention that the planet always has positive rotation, then it must have an axial tilt of 98 degrees... but this is really purely semantics about where you should place the negative-sign. We know which direction the planet rotates; we just usually happen to define the reference-direction to be oriented towards the "top" of the solar system. Here's my standard reference to de Pater and Lissauer, for the interested planetary scientist. Nimur (talk) 20:05, 29 January 2013 (UTC)
- What in the world are you talking about? The axial tilt (as usually defined) is about 98 degrees. Since that angle is larger than 90 degrees, the rotation is considered retrograde (as usually defined). There is only one negative sign and only one place for it to go (as things are usually defined). Dauto (talk) 21:18, 29 January 2013 (UTC)
- The angle between two non-directional lines has an ambiguity of ±180°. If both lines have an orientation, we can represent them as vectors and calculate a dot-product. In this case, we define the axial tilt as the angle between the planet's axis of revolution and the axis of rotation. The orientation of each axis is defined such that it satisfies a right-hand rule for the angular momentum; and therefore the axial tilt is greater than 90 degrees. The negative sign can be found in the equation of the dot-product by noting that A⋅B = AB cosθ = AB cos(-θ) = -ABcos(θ-180°). So, if you have a rotation dΦ/dt at an axial tilt θ, that is equivalent to a rotation -dΦ/dt around a flipped axis with tilt θ-180°. Nimur (talk) 22:02, 29 January 2013 (UTC)
Thanks! After reading your answer, I found that this has indeed been mentioned in the article retrograde motion! Sorry for not reading it fully - WikiCheng | Talk 17:14, 29 January 2013 (UTC)
- Interesting. I also thought that the phrase retrograde motion implied apparent retrograde motion. Live and learn. -- ToE 17:57, 29 January 2013 (UTC)
Manometry equation
In a manometer, I have the reservoir pressure as 0.5 bar, the height between the pipe and the datum line as 0.5 m, an angle of 20 degrees, and the relative density of the fluid in the pipe as 13.6.
From my understanding, the pipe pressure when the fluid is at its datum line should be the pressure in the reservoir minus the pressure due to the column of liquid.
I.e. 50000 - (800*9.81*0.5) which gives 46076 pascals but apparently the answer is 46.1 pascals. Where have I gone wrong? Clover345 (talk) 17:53, 29 January 2013 (UTC)
- Are you sure that the answer wasn't given in kilopascals? -- ToE 18:00, 29 January 2013 (UTC)
- My suggestion was based solely on the similarity of the numbers; I can't generate either their or your values from your statement of the problem. Someone else here might see what is going on, but if you'd like me to check your work, you'll need to explain the problem more fully.
- Do I understand correctly that your manometer's pipe is tilted at an angle of 20 degrees from the horizontal? Is the reservoir filled to the datum line? Is your "height between the pipe and the datum line as 0.5m" referring to the height of the fluid level in the pipe above the datum line, and by height do you mean vertical height, in which case the 20 degree angle is irrelevant, or is it the distance up the pipe along the 20 degree angle from the height of the datum line? Working backwards from your numbers, I couldn't figure out how you got the 800 from the density and the angle (the two values stated in the problem which do not appear elsewhere in your equation).
- Without understanding the problem further I can only say that it is either a typo or a poorly posed problem, because if the answer truly is 50,000 - 49,953.9 = 46.1 then the values in the statement of the problem were given with far too few significant figures. Given that the mantissa of your answer matches that in the answer key, you are probably correct and I just don't understand the full statement of the problem. -- ToE 19:17, 29 January 2013 (UTC)
- The other possible typo would be if the height of the liquid was 0.5mm rather than 0.5m. But I agree that it's most likely a typo in the answer that should have been in kilopascals...which is by far the more common unit for problems like this. SteveBaker (talk) 23:45, 29 January 2013 (UTC)
- Without a picture or more context, it's impossible to figure out. Could be an inclined tube manometer, the 13.6 suggests mercury, but where the 20 degree angle comes in, or the 800, I don't know. Like ToE, I can't make 800 from 13600 and 20 degrees. Ssscienccce (talk) 14:55, 30 January 2013 (UTC)
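For what it's worth, the candidate interpretations discussed above can be checked numerically. This is an exploratory sketch, not a solution: the density of 800 kg/m³ is the OP's unexplained figure, and the inclined-tube reading (0.5 m measured along a tube at 20° from the horizontal, filled with mercury at 13600 kg/m³) is only one guess at the problem's intent.

```python
import math

g = 9.81               # m/s^2
p_res = 0.5e5          # reservoir pressure, 0.5 bar in Pa

# The OP's own arithmetic (the 800 kg/m^3 density is unexplained):
p_op = p_res - 800 * g * 0.5
print(p_op)            # ~46076 Pa, matching the OP's figure

# One plausible reading: hydrostatic drop along an inclined mercury column,
# delta_p = rho * g * L * sin(theta):
p_inclined = p_res - 13600 * g * 0.5 * math.sin(math.radians(20))
print(round(p_inclined, 1))
```

Neither reading produces 46.1 Pa, which supports the suggestion above that the answer key meant 46.1 kPa (i.e. about 46100 Pa, close to the OP's own result).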
No sound
What are the values of density and compressibility so that dispersal of the sound is worthless?--YanikB (talk) 19:11, 29 January 2013 (UTC)
- The best way for you to understand any answer to this question is to read about common equations that are used to model sound propagation, like the ones in our article on sound-speed for ideal gas. See how many parameters there are in there? See the ones that affect speed? And the ones that create anisotropic effects? And the ones that attenuate the signal? There is no specific set of parameters that makes sound "worthless;" that's not even a well-formed description of sound; but for example, you could contrive a parameter space and draw out the ranges of parameters that you care about, and (for example) graphically represent the region of interest where you would expect a sound-wave could propagate over a unit-distance at a reference sound intensity. You could look at the set of all input-parameters that satisfy that criteria. This is what a physicist would call a configuration space. This style of thinking, and the mathematical and visualization tools that accompany it, helps physicists work with multidimensional problems (like the way various gas parameters affect sound-propagation in 3 dimensions). Nimur (talk) 19:53, 29 January 2013 (UTC)
- OK, but in practice? There is no sound on the moon, right? Then at what altitude does this happen? --YanikB (talk) 21:35, 29 January 2013 (UTC)— Preceding unsigned comment added by YanikB (talk • contribs) 21:34, 29 January 2013 (UTC)
- It wouldn't be at any definite altitude - it would just gradually get less and less. SteveBaker (talk) 21:58, 29 January 2013 (UTC)
- It's very approximately where the wavelength of the sound wave is the same as the mean free path of the gas molecules. Already when the mean free path is still somewhat smaller than the wavelength, the waves will be distorted. By the way, the same applies to high-frequency (> 1 GHz) sound at sea level. Icek (talk) 01:31, 30 January 2013 (UTC)
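Icek's rule of thumb can be sanity-checked with textbook numbers. The effective molecular diameter of air (about 3.7 Å) and the temperature are assumed values, not from the thread:

```python
import math

# Mean free path of an ideal gas: lambda = k*T / (sqrt(2) * pi * d^2 * p)
k_B = 1.380649e-23     # Boltzmann constant, J/K
T = 293.0              # K (room temperature, assumed)
d = 3.7e-10            # effective molecular diameter of air, m (assumed)
p_sea = 101325.0       # sea-level pressure, Pa

mfp = k_B * T / (math.sqrt(2) * math.pi * d**2 * p_sea)
print(f"mean free path at sea level: {mfp * 1e9:.0f} nm")

# Frequency whose wavelength equals that mean free path, taking the
# speed of sound as roughly 343 m/s:
f = 343.0 / mfp
print(f"matching frequency: {f / 1e9:.1f} GHz")
```

The mean free path at sea level comes out on the order of tens of nanometres, and the frequency whose wavelength matches it is several GHz, consistent with the "> 1 GHz" figure quoted in the comment above.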
- That can't be it either - it's the mean free path. Meaning the distance that a molecule can travel before hitting another one on average. Below that pressure, there will be plenty of molecules that do hit another one within one wavelength - and above that pressure, there will be plenty that don't. So this isn't any kind of a limit. My previous answer holds. There is no specific pressure/altitude at which sound disappears. SteveBaker (talk) 14:26, 30 January 2013 (UTC)
- How can it be approximate? Has nobody measured it? --YanikB (talk) 03:39, 30 January 2013 (UTC)
- Would you prefer if we said "at 7 km altitude, sound becomes worthless"? We could equally well say that at 4 km altitude, sound is worthless, and at 400,000 km away from Earth's surface, sound is even more worthless. Much of the sound produced at sea-level is also worthless. This just isn't a useful way to describe that the effect varies continuously. For any given altitude, there is attenuation and other non-ideal effects on sound-wave propagation. There is not a specific altitude or pressure beyond which sound ceases to propagate.
- But, because you want a specific answer, and because I am an enthusiast for engineering approximations, especially ones passed down from old fogeys whose experience pre-dates modern digital calculators, I use the easy rule of thumb that we lose one inch of pressure for each 1000 feet above sea-level. And because Earth's atmosphere at sea-level is conveniently at a pressure of ... one atmosphere, or approximately exactly thirty inches of mercury, that means that the "top of the atmosphere" is approximately exactly 30,000 feet, and if we could find some way to stand at that altitude, we'd expect to measure exactly zero inches pressure there. Above this, "there is no air." Of course, this is a ridiculous over-simplification of the way things really are; we can't apply the rule-of-thumb with any accuracy beyond a few thousand feet altitude; if anything, this is a perfect example of why we can't use overly-simplistic models, because they break down in the limit case. Nonetheless, if you've ever popped open the door or window on an airliner as it cruised along at 30,000 feet, you might conclude that there's "very little" air outside; and chances are very good that if you screamed as you fell out of the jet, nobody would hear you. So the engineering approximation works anyway, insofar as it predicts the first-order effect of the non-propagation of sound at 30,000 feet.
- Now, as any atmospheric physicist worth their salt will tell you, there are a few distinct boundary altitudes - the tropopause and stratopause and mesopause - whose altitude varies on an hour-by-hour basis - corresponding to some abrupt change in one or more properties of the atmosphere. But, it would be disingenuous to imply that these boundaries represent dividing lines between regions where sound does- and does-not propagate. Nimur (talk) 06:18, 30 January 2013 (UTC)
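Nimur's rule of thumb and its limit-case breakdown can be put side by side; a sketch (the ~27,000 ft scale height in the exponential model is an assumed round value, not a figure from the thread):

```python
import math

def pressure_rule_of_thumb(alt_ft):
    """The rule of thumb above: lose 1 inHg per 1000 ft, from 30 inHg
    at sea level; clamped at zero."""
    return max(0.0, 30.0 - alt_ft / 1000.0)

def pressure_isothermal(alt_ft, scale_height_ft=27000):
    """Isothermal barometric formula, in inHg. The ~27,000 ft (~8 km)
    scale height is an assumed round value."""
    return 30.0 * math.exp(-alt_ft / scale_height_ft)

for alt in (0, 10000, 20000, 30000, 60000):
    print(alt, round(pressure_rule_of_thumb(alt), 1),
          round(pressure_isothermal(alt), 1))
```

The linear rule hits zero at 30,000 ft, while the exponential model still gives about 10 inHg (~34 kPa) there, roughly consistent with the ~30 kPa measured value mentioned in the thread; this is why the rule only works as a first-order engineering approximation.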
- The actual pressure at 30,000 feet is about 30 kPa, 30% of that at sea level. The Extravehicular Mobility Unit space suits used on the ISS are pressurized to that same level during space walks, and sound seems to carry well enough to their microphones that the astronauts' voices don't even sound distorted to my ear. This would suggest that the above rule of thumb breaks down early enough that it does not inform the original question. -- ToE 09:53, 30 January 2013 (UTC)
- What Nimur said about the question not having much meaning, because in this use the word "worthless" is indefinable, is nevertheless true. I suggest that the question cannot be answered precisely, and so has been over-answered with detail the OP almost certainly does not want. In terms of a human utilising sound, that peters out at relatively low altitude for various reasons. Sound clearly does propagate well at 30,000 feet, because you can hear the engines of cruising airliners at ground level. Physicists theorise that acoustic shock waves can travel in deep space (it is not a perfect vacuum), but clearly that is of no practical use. Personally, I think the sound emitted by punk rock bands is worthless - others may disagree. It is up to the OP to define "worthless". Conversation not possible?? The height at which a rocket engine becomes inaudible to observers on the ground?? Wickwack 120.145.32.123 (talk) 13:51, 30 January 2013 (UTC)
- Yes, exactly. For example, this article talks about measuring the shape of the universe by measuring sound waves propagating from the Big Bang in near (but not total) vacuum. The "mean free path" argument is circumvented by the extremely low frequency of the sound waves. SteveBaker (talk) 16:27, 30 January 2013 (UTC)
Music and heat
I notice when I'm listening to very loud music that I particularly enjoy, my body temperature seems to rise to the point of sweating. This even happens during the colder winter months. Could the sound energy potentially be creating the heat, or is it more biological in nature? --86.45.153.90 (talk) 21:05, 29 January 2013 (UTC)
- It's biological. The amount of actual energy carried by sound waves is very, very small unless the sound is beyond deafening. Dragons flight (talk) 21:17, 29 January 2013 (UTC)
- Also, a "40 watt" speaker system typically can't go above 20 watts before distorting, and that's the reputable ones. The makers of cheaper systems tell bigger lies. When playing music, even a true 40 watt system typically doesn't send more than a watt of average power to the speaker. (compare how hot a 40 watt speaker gets compared to a 40 watt light bulb in the same enclosure) Far less than that makes it to your body. --Guy Macon (talk) 07:22, 30 January 2013 (UTC)
- Good point, although the waste heat will eventually warm the room somewhat and thus heat your body, if you remain there long enough, although some of it will also escape the room, and might affect the thermostat, if either a furnace or A/C is on. But these are all quite minor effects, to be sure. StuRat (talk) 19:25, 30 January 2013 (UTC)
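Dragons flight's point is easy to quantify from the definition of sound pressure level; a sketch (the body area and listening level are assumed round figures):

```python
def sound_intensity(db_spl):
    """Sound intensity in W/m^2 for a given sound pressure level in dB,
    relative to the standard hearing-threshold reference of 1e-12 W/m^2."""
    return 1e-12 * 10 ** (db_spl / 10)

BODY_AREA = 1.0    # m^2 of body facing the speakers -- a rough assumption
LOUD = 100         # dB SPL, a painfully loud listening level

power_on_body = sound_intensity(LOUD) * BODY_AREA
print(power_on_body)   # ~0.01 W
```

About 10 milliwatts lands on the listener, versus roughly 100 W of resting metabolic heat output, so the warming sensation must be biological, as said above.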
- If it's only music that you enjoy that does that, we can certainly eliminate all sources of heat except for the psychologically driven sources within your body. Much of the body's heat comes from organs like the heart and brain - so if really good music gets your mind racing and your heart beating faster - that might maybe do it. I doubt that your body temperature is actually rising though - the whole point of the sweating is to prevent that from happening. SteveBaker (talk) 23:42, 29 January 2013 (UTC)
- I don't doubt that you enjoy the music per se. But maybe you actually intensely dislike the volume at which you're playing it; you've become so used to playing and hearing music at such absurd volumes that it never occurs to your conscious mind to turn it down a notch or ten, while your subconscious mind knows exactly what's ideal for you and is rebelling for all it's worth and sending you a message. Maybe. -- Jack of Oz [Talk] 23:55, 29 January 2013 (UTC)
- Well, unless the OP is playing enjoyable music at a much louder volume than less enjoyable stuff - this doesn't explain the symptoms described. SteveBaker (talk) 14:20, 30 January 2013 (UTC)
January 30
Existence of solid-state variable inductors?
Is there such a thing as a non-coil, variable inductor? If so, what sort of frequency ranges are possible with a given unit? Also, can one be constructed from any of the other common electronic components (or some combination thereof) by exploiting parasitic capacitance? 75.196.240.108 (talk) 02:41, 30 January 2013 (UTC)
- Yes. Inductance can be synthesised by incorporating capacitance in a feedback network. By the circuit technique known as gyration, very high Q inductance can be created at audio and low RF frequencies. The use of shunt negative feedback around an amplifier lowers output impedance. If capacitance is used to reduce the loop gain as frequency increases, the output impedance must then rise with frequency - this is the property of inductance. In both gyrators and negative feedback amplifiers, the loop gain can be adjusted by means of a potentiometer, or by a control voltage. Hence the resulting inductance is varied. Using a gyrator, it is thus readily possible to make a very high Q variable inductor, outperforming a wound component at selected frequencies. A very good gyrator for audio and low RF can be constructed with a couple of op-amps. By using discrete transistors the range can be extended to high frequencies. Gyrator integrated circuits are available.
- See http://en.wikipedia.org/wiki/Gyrator, however please note that the example single op-amp circuit presented there offers rather poor performance.
- Similarly, transit time in transistors causes practical amplifiers to drop in gain at high frequencies. Hence, a feedback network can make inductance without any coil and without even any capacitance. However, the stability, accuracy, and quality of such inductance is usually pretty poor.
- Keit 121.215.78.229 (talk) 03:39, 30 January 2013 (UTC)
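As a concrete illustration of Keit's description: the textbook single op-amp gyrator simulates an inductance L = R1·R2·C. A sketch (the component values are illustrative, and real performance depends on the op-amp, as noted above):

```python
import math

def gyrator_inductance(r1, r2, c):
    """Simulated inductance (henries) of the textbook op-amp gyrator,
    L = R1 * R2 * C, where c is the capacitor being 'gyrated'."""
    return r1 * r2 * c

def resonant_frequency(l, c_tank):
    """Resonant frequency (Hz) if the simulated inductor is placed in a
    tank circuit with capacitance c_tank."""
    return 1.0 / (2 * math.pi * math.sqrt(l * c_tank))

L = gyrator_inductance(10e3, 100.0, 100e-9)   # 10 kohm, 100 ohm, 100 nF
print(L)                                      # 0.1 H from a coil-free circuit
print(round(resonant_frequency(L, 1e-6)))     # tank with 1 uF: ~503 Hz
```

Making R1 a potentiometer (or a voltage-controlled resistance) varies L directly, which is the "variable inductor" behaviour the question asks about.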
Menopause for trees
Are there any fruit trees that stop producing fruits after X number of years? Someone described this "menopause for trees" phenomenon to me and I'm very dubious of its veracity.Dncsky (talk) 08:36, 30 January 2013 (UTC)
- A Google search on "years old and still producing fruit" (with the quotes) turns up enough examples that I doubt that it is true in the general case. There may very well be individual species that do. --Guy Macon (talk) 10:55, 30 January 2013 (UTC)
- You might like to read How grandma's apple tree shook experts to the core about an apple tree that is thought to have been planted in 1806 and is still producing apples of a type unknown to the National Fruit Collection. Alansplodge (talk) 11:24, 30 January 2013 (UTC)
- I think the answers above are far too hasty, and are focused on rare exceptions rather than the general trend. It is a little sloppy to call it "menopause for trees", but the analogy isn't terrible... see plant senescence for starters. Trees do age, and fruit yield does tend to decline with age. For instance, this extension service page indicates stone fruit trees only viably fruit for 15-20 years: [7]. Another good source (USA govt. report here: [8]) gives a "useful life" of fruit trees as 16 years (peach) to 37 years (almond). What you are really looking for is an "age-yield relationship", and there is much research into this topic if you want to dive into some specifics.
- In short, many species of commercial interest do show declining fruit yield with age. How fast the drop-off is, and how long it takes can vary wildly by species. SemanticMantis (talk) 17:09, 30 January 2013 (UTC)
Thanks a lot, everyone. Dncsky (talk) 21:38, 30 January 2013 (UTC)
Menopause and senescence are not the same thing. Menopause is the gap between reproductive lifespan and actual lifespan. In most animals those are close together, but they have become separated in humans and the gap is progressively lengthening. The adaptation value of menopause has been debated for decades. One of the simplest hypotheses is that the extra years improve the likelihood of reproduction of grandchildren, though evidence is mixed. See life history theory. If there is a gap in plants between potential age until death by senescence and an age at which the plant can no longer reproduce itself, it would be fair to compare it to menopause. But it makes no sense and is not useful to simply consider menopause the same as aging. alteripse (talk) 21:41, 30 January 2013 (UTC)
Name of the diagnostic test shown in the movie Exorcist
What is the name of the diagnostic test performed on Regan MacNeil in the movie The Exorcist (film)? She undergoes two different tests: one is Pneumoencephalography, and what is the other test, where the doctors inject some dye into her neck? --PlanetEditor (talk) 13:10, 30 January 2013 (UTC)
- Does anyone know? --PlanetEditor (talk) 15:18, 30 January 2013 (UTC)
- I'd leave it at least 24 hours (rather than just over two, in this case) before prompting people to respond. Everyone's in different time zones, and has different work commitments and activity cycles. Please just be patient, and I'm sure that if there is an answer, someone will provide it. AlexTiefling (talk) 15:27, 30 January 2013 (UTC)
- Maybe cerebral angiography. Sean.hoyland - talk 15:43, 30 January 2013 (UTC)
- It will also help if you can specify at what minute of the film these events occur so people don't have to watch the whole film over again. μηδείς (talk) 18:30, 30 January 2013 (UTC)
- 43-46 minutes. --PlanetEditor (talk) 19:15, 30 January 2013 (UTC)
- I think the movie called one of the procedures a "spinal tap". StuRat (talk) 19:21, 30 January 2013 (UTC)
Good answers. Pneumoencephalography was a very unpleasant, very low-yield test for brain tumors that was devised in the early 20th century and replaced by the CT scan and MRI in the 1970s. Angiography is still done to look for aneurysms or other interruptions of brain circulation. In the 1970s it was also done to look for brain tumors but these days CT and MRI are superior for that purpose. It involves injecting dye into an artery, not into the spine. A spinal tap is a lumbar puncture and is always done with a long straight needle in the lower back to obtain cerebrospinal fluid without being high enough to risk puncturing the spinal cord itself. I don't remember the scene in the movie, but if it was lower back it was a spinal tap, and if higher up it was angiography. If they made the movie now, they would probably depict a neurometabolic PET scan, which shows differential activity of separate parts of the brain. alteripse (talk) 21:29, 30 January 2013 (UTC)
Dynamics of a planetoid being captured by a star
A rogue planetoid from interstellar space comes from the left (as viewed from a point outside the plane of the path and the star) into the vicinity of a star, but below the star as seen by the viewer. (Or it could be a soon-to-be-moon approaching a large planet.) The star's gravity pulls the planetoid out of its linear path and captures it into orbit. The planetoid swings around above the star and back toward the left, then curves back around to the right, etc.
(1) Is the orbit exactly elliptical as soon as one orbit has been completed?
(2) If not, then how many orbits does it take to become within epsilon in some sense of being elliptical?
(3) How is the orbit's eventual elliptical eccentricity determined by the ratio of the star's mass to the planetoid's mass, the amount by which the planetoid would have missed the star in the absence of the star's gravity, and the planetoid's initial velocity?
(4) Which focus of the ellipse is the star at -- the one nearer where the planetoid came from, or the farther one? Duoduoduo (talk) 19:03, 30 January 2013 (UTC)
- An object not in orbit will not enter an orbit unless there is some change in momentum. So, for example, an object falling from rest at infinity to near the sun will follow a parabolic path. If the object was already moving towards the sun, it will follow a hyperbola. There would have to be some interaction, such as tidal forces or an encounter with another planet, to reduce the rogue object's speed enough for it to enter orbit. As this change happens, the shape of the orbit will transition to an ellipse. It could also interact and still not be slowed below escape velocity, in which case it would continue on a different hyperbolic trajectory. The star would be at the focus near where the rogue object is moving the fastest. For (3), this has nothing to do with the mass ratio, and since gravitational capture will not happen without something else happening, it depends on what else, such as planets, is going on. Graeme Bartlett (talk) 21:08, 30 January 2013 (UTC)
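The point that a body arriving from interstellar space cannot be captured without losing energy to something follows from the sign of its specific orbital energy; a sketch using the vis-viva relation (constants are standard values; the speeds are illustrative):

```python
import math

MU_SUN = 1.327e20   # gravitational parameter of the Sun, m^3/s^2
AU = 1.496e11       # astronomical unit, m

def specific_energy(v, r, mu=MU_SUN):
    """Specific orbital energy in J/kg. Negative => bound (ellipse),
    zero => parabola, positive => unbound (hyperbola)."""
    return v**2 / 2 - mu / r

v_esc = math.sqrt(2 * MU_SUN / AU)     # escape speed at 1 AU, ~42 km/s
print(round(v_esc / 1000, 1))

# A rogue body coasting in from infinity always arrives with v >= v_esc,
# so its energy is >= 0: without a third body or drag it stays unbound.
print(specific_energy(45e3, AU) > 0)   # True: faster than escape, hyperbola
print(specific_energy(30e3, AU) < 0)   # True: slower than escape, bound
```

Any capture mechanism (collision, gas drag, a swing-by past a planet) works precisely by pushing this quantity below zero.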
- The planetoid may, for example, nail a small asteroid (one that's orbiting the star) and lose enough momentum in the process. It would then run a risk of getting stuck with the star, as a captured asteroid. Or there could be torque transfer due to a gravitational swing-by with a gas giant - which could suffice if the encounter is close enough and/or the planetoid came in slowly enough. - ¡Ouch! (hurt me / more pain) 21:36, 30 January 2013 (UTC)
Thanks. So when I read about various natural satellites having been probably captured by the planet, it's always the case that there had to be some third body involved? What would that have been -- another existing moon? Duoduoduo (talk) 22:00, 30 January 2013 (UTC)
- In theory, it could lose momentum with a grazing hit to the planet. That only requires two bodies. Very unlikely, of course. Or it can lose momentum by plowing through some gas. --Guy Macon (talk) 02:32, 31 January 2013 (UTC)
National Geographic's raving review of biochar
According to [9], small cookstoves made out of ordinary buckets that make biochar are the hottest new thing in Kenya. They cook food without killing people with smoke, which is said to be the greatest environmental hazard in terms of deaths; they produce biochar as a useful fertilizer; they can run on anything; they even say that "A family cooking a pot of beans will use 40 percent less wood with the Estufa Finca than with an open-fire stove, said SeaChar President Art Donnelly, who designed the stove."
Now I don't know about you, but I'm getting high on the fumes here. Surely this stuff can't all be true - higher efficiency, wider fuel range, less fuel consumption, and leftover reduced carbon - simply by rearranging your cooker a little to exclude oxygen from a fire? Can it...? Wnt (talk) 20:02, 30 January 2013 (UTC)
- I agree with your skepticism. One error I suspect they make is in saying something like "One kg of fuel burned openly produces X amount of unwanted byproducts, while one kg burned this way produces less". They should compare the waste per energy produced for cooking, not per kg of fuel. If you use less oxygen and thus get incomplete combustion, then you produce less energy. If they then increased the amount of fuel burned as biochar, in order to produce the same amount of energy for cooking, I suspect the advantages would largely disappear. StuRat (talk) 20:15, 30 January 2013 (UTC)
- I don't think it's about reducing the oxygen, it's not letting the oxygen reach the material itself but only burn the fumes from pyrolysis: they are basically making charcoal, heating the wood and burning the gas produced, which is a mixture of carbon monoxide, hydrogen and other stuff. It's how coal gas was made, and in the second world war they even drove cars with wood gas generators. I don't doubt it's an efficient stove, but I personally wouldn't put that biochar in the ground. That charcoal can be burned in a normal stove, or a barbeque or whatever. Would take away the "green" aspect I guess. Ssscienccce (talk) 21:34, 30 January 2013 (UTC)
- Leftover carbon and reduced O2 means it is operating at lower efficiency. However, I suspect that is (or at least could be) a good thing, in that it locks up carbon in long-residence time soil carbon, rather than dumping it all into the air with a normally aspirated wood stove. Perhaps they are oversimplifying the results of a life-cycle analysis? That would allow them to sweep "equivalent" units under the rug, e.g. "less fuel" means more fuel, but the wider range means that a lot of it would not have been considered fuel for an ordinary stove, so they can call it "less" in some sense. SemanticMantis (talk) 21:23, 30 January 2013 (UTC)
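The "waste per unit energy, not per kg" point above can be illustrated with rough bookkeeping; a sketch in which every number is an assumed round figure chosen for illustration, not data from the article:

```python
# Rough energy bookkeeping for 1 kg of dry wood. All figures are
# illustrative round numbers, not measured values.
wood_energy = 18.0        # MJ/kg, typical lower heating value of dry wood

# Open fire: complete combustion, but poor heat transfer to the pot.
open_fire_eff = 0.10
open_fire_cooking = wood_energy * open_fire_eff

# Pyrolysis stove: some mass remains as char (energy left unburned),
# but heat transfer to the pot is assumed much better.
char_fraction = 0.25      # kg of char per kg of wood
char_energy = 30.0        # MJ/kg of charcoal
stove_eff = 0.30
released = wood_energy - char_fraction * char_energy
stove_cooking = released * stove_eff

print(open_fire_cooking)  # ~1.8 MJ of cooking heat per kg of wood
print(stove_cooking)      # ~3.15 MJ per kg, despite the leftover char
```

Under these assumptions the stove delivers more cooking heat per kg even though it leaves energy behind as char, so "40 percent less wood" and "leftover biochar" are not actually contradictory claims.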
measuring decarboxylase activity
What are simple ways to measure the CO2 output of a decarboxylase enzyme, without having to use pyruvate as a reagent (preferably not NAD+ or NADH, due to interference)? How would air-free UV absorbance work? 137.54.28.86 (talk) 20:13, 30 January 2013 (UTC)
- Well, the most interesting (and easy for the consumer) would be a variant of [10] - i.e., under the right circumstances, to look at gas evolution. But you still need some reagent (some source of the CO2 to be emitted) if you want to actually output CO2, and which reagent would depend on exactly which enzyme from which source. Wnt (talk) 20:19, 30 January 2013 (UTC)
- Yes, I have the CO2 source (alanine decarboxylase) covered. This is for an upper-level biochemistry laboratory where we have to come up with our own protocols for assaying the activity of an unstudied enzyme (a putative valine-pyruvate transaminase). 137.54.28.86 (talk) 21:01, 30 January 2013 (UTC)
- Well, let's riddle out the question here. There's an l-alanine decarboxylase in Camellia sinensis (tea).[11] Alanine is CH3CH(NH2)COOH versus pyruvate CH3COCOOH. So presumably you're aminating pyruvate using valine as the ammonia source, creating alanine, which is decarboxylated to, I suppose, ethylamine? Wnt (talk) 21:24, 30 January 2013 (UTC)
Sick
close request for medical advice
The following discussion has been closed. Please do not modify it.
I'm sixteen and have only been sick 4 times in the last 10 years; it has always lasted only a few days, with intervals of several years in between. The rest of my family is sick as often as usual. Does anyone have an idea why? -- 80.161.143.239 20:52, 30 January 2013 (UTC)
Please seek medical attention. The OP should do the same. μηδείς (talk) 03:57, 31 January 2013 (UTC)
13.7 billion years
If the universe is 13.7 billion years old, does that mean that the farthest thing away from you is 13.7 billion light years away? 203.112.82.129 (talk) 21:53, 30 January 2013 (UTC)
- No, that would only be true if you are at the center of the universe. (You can't be, since I am) ;) ~:74.60.29.141 (talk) 22:07, 30 January 2013 (UTC)
- Due to the metric expansion of space, objects can be farther than 13.7 bn ly away. We note at size of the universe that the theoretically observable universe is a sphere roughly 47 bn ly in radius. Note further that the universe may be infinite in size, and thus objects (though not observable objects) could be infinitely far away. However, the "center of the universe" joke above is completely wrong. Measurements from any point in the universe are consistent with that point being the "center". — Lomn 22:13, 30 January 2013 (UTC)
- Right, you are at the center of the universe, and so am I! SemanticMantis (talk) 22:17, 30 January 2013 (UTC)
- How far away, then, can the farthest thing from me be? 203.112.82.2 (talk) 22:19, 30 January 2013 (UTC)
- Now, as far as "the farthest thing that we actually have observed" goes, the answer is about 13.4 billion light years. My understanding, though, is that those sort of distance records are corrected for metric expansion, so that it's really more a statement of "we see something 13.4 billion years old" as opposed to "we see something currently 13.4 billion light years away". For "we see things X years old", 13.7 billion really is the limit. The cosmic microwave background radiation dates from about 300,000 years after the big bang, and prior to that point, the universe was opaque to EM radiation. We can see the CMBR, but nothing prior. — Lomn 01:59, 31 January 2013 (UTC)
- This is rather simple. If the early universe had been transparent, the oldest light you could see would be 13.7 billion years old. The objects that had transmitted that oldest light would now be even further away, due to the metric expansion mentioned above. See redshift. But the early universe was opaque, see cosmic background radiation. So you cannot see 13.7 billion years back, nor objects that are that far away plus metric expansion. μηδείς (talk) 03:54, 31 January 2013 (UTC)
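The ~47 billion-light-year figure quoted above comes from integrating the expansion history of the universe; a rough numerical sketch (flat Lambda-CDM with round parameter values; radiation is ignored, so it lands a little low):

```python
import math

H0 = 70.0                     # Hubble constant, km/s/Mpc (round value)
OMEGA_M, OMEGA_L = 0.3, 0.7   # flat Lambda-CDM densities, radiation ignored
C = 299792.458                # speed of light, km/s
MPC_TO_GLY = 3.2616e-3        # 1 Mpc = 3.2616 million light years

def E(z):
    """Dimensionless expansion rate H(z)/H0 for flat Lambda-CDM."""
    return math.sqrt(OMEGA_M * (1 + z) ** 3 + OMEGA_L)

def comoving_distance_gly(z_max, steps=200000):
    """Comoving distance to redshift z_max, in billions of light years,
    by trapezoidal integration of c dz / H(z)."""
    dz = z_max / steps
    total = 0.0
    for i in range(steps):
        z = i * dz
        total += 0.5 * (1 / E(z) + 1 / E(z + dz)) * dz
    return (C / H0) * total * MPC_TO_GLY

# Comoving distance to the surface of last scattering (z ~ 1100):
print(round(comoving_distance_gly(1100)))   # about 45
```

The published ~46-47 bn ly figure uses better parameter values and includes the radiation era, but the structure of the calculation is the same: light emitted 13.7 bn years ago comes from matter that is now much farther than 13.7 bn ly away.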
I asked this but maybe wasn't clear enough...
I asked this but maybe wasn't clear enough. The human eyes take depth cues in the near field from how far they have to converge (e.g. toward something near your nose). This angle is used by the brain. The 3D cameras I've heard of that use stereoscopy, however, remain fixed in a plane. Why is this? Is there some theoretical reason the cameras shouldn't be on fine servos and also turn, using this convergence information as well? Thanks! 178.48.114.143 (talk) 00:58, 31 January 2013 (UTC)
- Not so much a theoretical reason as a practical reason. The eyes have a region of high acuity -- the fovea -- that is quite small, only about 5 degrees across, actually even less for the highest-acuity portion. This makes convergence necessary in order to get both foveas pointed toward a target of interest. 3D cameras, as I understand it, use CCD arrays that have essentially equal resolution across a much larger portion of space. Looie496 (talk) 01:07, 31 January 2013 (UTC)
- OK, but are you saying it wouldn't even help? If we took a normal 3D camera and were trying to get the depth image in the near field, it wouldn't help our algorithm if we could pick any point, have both cameras pivot so that point is in the center of their vision, and be told that angle? That doesn't even help? The algorithm - any algorithm - must be just as happy without that ability? It doesn't give additional information? 178.48.114.143 (talk) 01:17, 31 January 2013 (UTC)
- It might help, but adding more moving parts makes any system less reliable, so there would be a definite downside. You might compare with flight, where bird wings have changeable shapes, and, with a few exceptions, airplanes don't. It does help to have a changeable shape, but the additional complexity brings in new risks. StuRat (talk) 01:22, 31 January 2013 (UTC)
- Okay. Do you think if you have theoretically 'perfect' servos that turn very, very slowly but with complete accuracy... then in this case how much "more detail" (theoretically) can we gain? For example, if parallel lenses can resolve to 1 mm accuracy at a 10 centimeter distance, then would adding an exact angle to converge on a pixel (and still knowing the distance between lenses) increase this to 0.1 mm or anything like that? Sorry, I don't know that much about optics, just curious! I'm curious about the theory here. 178.48.114.143 (talk) 02:14, 31 January 2013 (UTC)
- No change in resolution. All you are doing is cutting off the right side of the picture and adding to the left and thus making part of the image useless for 3D. Unlike an eye, with a camera there is nothing magical about the center spot. --Guy Macon (talk) 02:43, 31 January 2013 (UTC)
- Are you sure? I mean let's imagine that something is directly in front of the left camera, like 10 cm away. Then the right camera has to turn, say, 45 degrees (if it's also 10 cm away): this "45 degrees" would tell you that it's exactly 10 cm away, each cm farther makes the right camera turn a bit less, each cm closer makes it turn a bit more. How are you so sure that ALL of that information is in the basic stereoscopic image without any help from the convergence? I mean I guess I'm asking for kind of like a mathematical argument as to why the convergence wouldn't contain more information... Thanks. 178.48.114.143 (talk) 02:51, 31 January 2013 (UTC)
- I know because optics doesn't work that way. You think that there is some magical property called "convergence" that happens when you change where the camera points, but no such property exists. Similar properties exist, but none of them change when you change where the camera points. Look at any photo. Is the resolution better at the center than it is off-center? No? Then why do you imagine that it would be different if motors aimed the camera? --Guy Macon (talk) 03:01, 31 January 2013 (UTC)
- Hey, sorry if I was unclear. I'm not interested in image quality, but only the quality of "depth" information. I know that my own eyes give a FAR stronger and more accurate depth reading closer to the eyes. But that could be for several reasons, including making use of focus. Is the fact that my eyes converge one of these strong signals or not? I mean, for a depth reading, let's say we are reading a point that is directly in front of the left camera. The basic information is like this [ x ] on the left camera and [x ] on the right camera. How do you know that that is JUST as much information about the depth location of X as if, rather than just two bitmaps, we also had an exact angle empirically arrived at, by the two cameras swivelling toward each point they're depth-gauging, and keeping track of the convergent angle? I understand that you are saying there is no extra information there. But could you give a mathematical argument as to why? Thank you. 178.48.114.143 (talk) 03:45, 31 January 2013 (UTC)
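The equivalence being argued above can be checked with plain triangulation: depth recovered from parallel-camera disparity and depth recovered from convergence angles are the same calculation. A sketch (pinhole camera model; the baseline, focal length, and point position are illustrative numbers):

```python
import math

B = 0.10    # baseline between the two cameras, metres (illustrative)
F = 0.008   # focal length, metres (illustrative)

def depth_from_disparity(u_left, u_right):
    """Depth from a parallel rig: z = f * B / disparity, where u_left and
    u_right are image-plane x coordinates of the same scene point."""
    return F * B / (u_left - u_right)

def depth_from_convergence(theta_left, theta_right):
    """Depth from the angles (radians, from each camera's forward axis)
    at which converging cameras would fixate the same scene point."""
    return B / (math.tan(theta_left) - math.tan(theta_right))

# A near-field point at x = 0.02 m, z = 0.50 m in the left camera's frame:
x, z = 0.02, 0.50
u_l, u_r = F * x / z, F * (x - B) / z                 # projected positions
th_l, th_r = math.atan2(x, z), math.atan2(x - B, z)   # fixation angles

print(depth_from_disparity(u_l, u_r))      # ~0.5, from disparity alone
print(depth_from_convergence(th_l, th_r))  # ~0.5, same depth from angles
```

Both computations recover the same depth from the same pair of sightlines, which is the sense in which a convergence angle carries no depth information that the fixed-plane disparity doesn't already contain; what convergence buys a foveated eye is keeping the target inside the small high-acuity region, as noted earlier in the thread.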
I feel like I'm programmed to be racist
This is not an internet forum
The following discussion has been closed. Please do not modify it.
So, today a Pakistani guy I went through school with sent me a text saying he was in town, and we arranged to meet up for a chat about life and old times. 15 minutes later I arrived in the city center and met him. He was with a large group of Pakistani and Indian guys, and I instantly felt uneasy, or suspicious or something. I don't even know how to describe it. A negative feeling telling me to beware of these people. I was reluctant to approach them or to appear part of their group, even though they greeted me warmly and made some short small talk. I was kicking myself inside and telling myself these are just more of my fellow human beings. Luckily they were going a different way and my friend left them and came with me into a cafe for a chat. Turns out his group of friends were a college cricket team. I commonly feel this way around people who are not white, even close friends. I suppress this emotion because I am against racism, but I am intrigued that I feel it. I don't feel this way with groups of white people; I am white. I don't even feel it around even the most foreign white people, from eastern Europe or Russia or wherever. Can anyone explain this apparently biological basis for racism to me? I have heard that people with recessive appearance traits are more prone to racism. I have many of those, blue eyes for example.--Whichwayto (talk) 01:01, 31 January 2013 (UTC)
This is the Wikipedia reference desk. We can't comment on your personal feelings. μηδείς (talk) 03:48, 31 January 2013 (UTC)
RF interference - update
About 2 weeks ago I was asking about interference between a wireless mike and some wireless devices. It turned out that one of the other devices had gone bad - not interference from the mike. Bubba73 You talkin' to me? 03:12, 31 January 2013 (UTC)