The Idea Box
August 16, 2013 12:00 AM
Imagine an AI agent, released freely onto the internet, that performs crime autonomously. The beneficiary of this crime has plausible deniability and no direct responsibility for, or direct control over, the criminal acts. But the agent is designed with propensities intended to shape the world to the human criminal's benefit. And perhaps the criminal maintains a well-hidden kill switch. As autonomous artificially intelligent agents advance in capability, their ability to perform acts with ethical and criminal valence, and the thread of responsibility for those acts, becomes important. The legal system is riven with assumptions about human psychological dynamics; these will have to be rethought as nonhuman actors develop partial autonomy. To what extent are their behaviors foreseeable by their creators?
March 3, 2013 12:00 AM
In Syria, the U.S. is hesitant to arm the rebels, since the weapons might be redirected against our interests. I imagine of particular concern are anti-aircraft weapons, which could be misused to attack civilian aviation in other countries. Could such weapons be engineered with GPS interlocks, such that they would not function outside of a defined geographical area? The interlock would need to be engineered deep into the structure of the weapon, such that removing the interlock disables the weapon. For modern heavily electronic weapons this should be possible.
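A minimal sketch of what the interlock's decision logic might look like; all coordinates, radii, and function names here are hypothetical, and the hard engineering problem is burying this check so deeply in the arming electronics that removing it disables the weapon:

```python
import math

# Hypothetical geofence interlock: the weapon arms only inside a
# permitted circular region. The coordinates are illustrative, not real.
PERMITTED_CENTER = (35.0, 38.0)   # (lat, lon) of the allowed operating area
PERMITTED_RADIUS_KM = 150.0

def haversine_km(p1, p2):
    """Great-circle distance between two (lat, lon) points, in km."""
    R = 6371.0  # mean earth radius, km
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def arming_allowed(gps_fix):
    """True only when the current GPS fix lies inside the geofence."""
    return haversine_km(gps_fix, PERMITTED_CENTER) <= PERMITTED_RADIUS_KM
```

The check itself is trivial; the whole burden of the idea rests on making the GPS receiver and this comparison inseparable from the firing circuit.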
March 3, 2013 12:00 AM
Lately I've been thinking about strategies for large-scale cheating in large classes (don't ask). MOOCs have novel needs in this regard, which are being served by remote proctoring services. But just as education can be technologically enabled at mass scale by MOOCs, so can cheating. Imagine a test-assist service (let's call it "remote tutoring" for marketing purposes) where the cheater screen-shares the online exam with an answer-providing confederate, and the confederate delivers answers on the screen of a cellphone that is placed just behind the webcam being used by the remote proctoring agency to monitor the exam. (Ideally, the cellphone is taped to the computer screen itself, so that sight lines to the phone fall within the main screen.) With a bit of infrastructure work by the cheating company, it could deliver answers directly from a question database built up over time from prior exams, so that the cheating is as scalable and inexpensive as the MOOC itself. Such a service would of course be hosted in a country with weak rule of law and would provide service at a very reasonable price. It will take some time for the online cheating market to develop, but once it does, would this mode of cheating make online at-home secure exams impossible, at least without demanding root access to the user's home computer? Fully secure MOOC exams might require a distributed network of in-person testing centers, which sadly limits the ability to reach underdeveloped areas with credit-bearing courses.
January 31, 2013 12:00 AM
The story of an inadvertent controlled experiment in correlations of student performance with teacher evaluation. Alice recently taught two out of four large lecture sections of an introductory science course on a challenging topic. Alice tries to do something different each semester to improve the course. This time, she pared back extraneous details and focussed the lectures more on understanding over formula-crunching. She integrated breaks where students worked through sample questions in class, and asked them to discuss these questions with their neighbors. She conveyed high expectations. The other instructor, Bob, followed a traditional lecture style. He felt that some students would succeed, some would fail; there wasn’t much he could do about it, so he would deliver to the students what they wanted: sample exam problems in each lecture. The four sections of the course were identical in all aspects other than lecture: shared homework, labs, recitations, exams, review sessions. Each lecturer contributed half of the questions on each exam. Alice asked the course administrator to track student performance on the exams by lecture section, to see if the new teaching techniques had any effect. Alice was also curious about the end-of-semester student evaluations. Let’s start with those. Alice was hammered in the student ratings of teaching effectiveness, by far the lowest ratings of any course she had taught (including this same course in the past). Bob’s ratings were fine – a full two points on a six point scale separated them. Beyond raw hatred and anger, Alice’s students overwhelmingly expressed a single sentiment: It was grossly unfair that Bob’s students received sample exam questions in every lecture and they did not. Alice’s students felt strongly that they were not properly prepared for the exams because they were not taught how to work through each kind of problem. In reality? 
Both of Alice’s lecture sections performed better than both of Bob’s sections on every exam. The difference increased as the semester proceeded. Roughly one third of Alice’s students moved up one letter grade increment as a result. Alice considered possible confounding effects: Do better students tend to choose lectures at a particular time of day? Possibly, but colleagues who have studied the effect said that the time-of-day trend tends to be opposite to what Alice and Bob observed, and there was also no significant time-based sub-trend between the two separate sections that each taught. Did a subpopulation of Alice’s students attend Bob’s lectures? Mixing between the sections would generally suppress differences in performance, not accentuate or reverse them*. Otherwise, this was like a controlled experiment: all course elements beyond lecture were identical. The students punished Alice for lowering their exam scores, when in reality she raised them. How much better might Alice’s students have done had they recognized and engaged with what was working? Alice’s Dean likes to say that the student evaluations are an imperfect measure of teacher performance. But an imperfect measure is one with a modest positive correlation to the target characteristic, not a negative correlation. Alice notes that the university systematically measures student opinion across course and semester, but does not systematically measure student learning. An institution will reward what it can recognize, but will drift otherwise. The hatred and anger continue into the followup course in the next semester (with a different co-lecturer), as does the improved performance of Alice’s students. The numerical average of the student evaluations continues to be poor, but the student comments now also contain some lengthy positive statements, several of which note with dismay the attitudes and practices of other students. What attitudes and practices? 
For many, this required course is perceived mainly as a barrier against the student gaining admission to his or her desired major: a mechanical hurdle, not an intellectual challenge. Internet discussion groups and commercial websites deliver homework solutions. The “response function” to homework is negative: more challenging homework means more cheating and less total student effort. Local tutoring services deliver plug-and-chug training to muddle through exams. And old sample exams. No-one wants to think of their own actions as illegitimate. But cheating on homework can be made legitimate, in a student’s mind, if the course can be made illegitimate. The motto of the students’ private Facebook group, joined by a majority of the class: “Seriously, f--- this course”. A psychological challenge to the students’ delegitimization of a course – through an instructor’s high expectations for learning – can elicit a forceful defensive emotional response, to push the illegitimacy back onto the course. Anti-correlation. *Assuming that the two sets of lectures initially have similar student populations, whatever subgroup moves between lectures can be matched against an existing equivalent subgroup in the destination lecture that receives the same “dose” of lecture. The combination of both subgroups then contributes nothing to between-lecturer differences in student performance.
January 15, 2013 12:00 AM
Referee reporting is an under-recognized service towards the community. I'd like to use this post to thank specifically those hero referees who have handled three or more manuscripts for AIP Advances in the past year: Wu, Tom; Gambino, Jeff; Hong, Hawoong; Jiang, Hua; Liao, Albert; Mohammad, S.; Pathak, Rajeev; Pautrat, Alain; Albrecht, Joachim; Ansermet, Jean-Philippe; Ariando, Ariando; Aruta, Carmela; Bag, Swarup; Chang, Kai; Chen, Fang-Chung; Chen, Yongyao; Chibowski, Emil; Ciftja, Orion; Das, G. P.; Kan, Erjun; Kitamura, Kyoko; Lew Yan Voon, Lok C.; Li, Qiuzi; Morgan, Benjamin; Nieto, Juan; Shih, Yanhua; Smet, Philippe; Song, Juntao; Varignon, Julien; Vij, J. K.; Wen, Hai-Hu; Xiao, Haiyan; Xu, Jun; Zumer, Slobodan.
January 13, 2013 12:00 AM
For a normal folding technique, what physical constant(s) set the maximum rate at which clothes can be folded?
January 7, 2013 12:53 AM
January 5, 2013 8:25 PM
One classic starting point for quantitative macroeconomic analysis is the assumption of an efficient market. Perturbations can then be applied about this regime to account for real-world complications. A relative who runs a major division of a biomedical technology firm recently commented that in business, "perfect competition sucks". Business managers look for market segments in which they have an advantage and can obtain a higher level of profit as a result. Essentially, every business is looking for a little local monopoly, a little local market failure to serve. The competition between firms then helps to minimize the magnitudes of these market failures. This point of view brings a striking observation into focus: all of the market actors in an economy have as their main goal the avoidance of an efficient market, i.e. the very reference point on which much of economic analysis is based. Is there another possible reference point, perhaps a "distributed mini-monopoly" assumption, that could provide an alternative basis for analysis? These two limits remind me, in a rough sense, of the nearly free electron model and the tight-binding model for solids: each is appropriate in a different limit.
January 4, 2013 7:40 PM
Of course, many computer games make use of physics engines, but a few bring out salient fundamental laws of physics in a clear manner. Here are two iPhone games that do an exceptional job in this regard:
- Osmos teaches momentum conservation, gravitation, and orbital mechanics. You are an abstract circular creature that propels by expelling mass; your ejecta engage in inelastic collisions with other objects. Many levels also host gravitational interactions.
- Hundreds is a charming exploration of the statistical mechanics of hard-core gases, with important roles for fluctuations, PV work (when moving barriers), and free volume. It's not entirely clear whether momentum is 100% conserved in all collisions (some edge cases seem to produce accelerations), but overall the system behaves very "physically".
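Osmos's propulsion mechanic makes for a tidy one-dimensional sketch. This is my own toy illustration, not the game's actual code; it simply shows that propelling by expelling mass conserves total momentum:

```python
def expel(mass, velocity, dm, v_eject_rel):
    """Expel a parcel of mass dm at velocity v_eject_rel relative to the body.
    Returns (new_mass, new_velocity, parcel_momentum); total momentum is conserved."""
    parcel_v = velocity + v_eject_rel          # lab-frame velocity of the ejecta
    new_mass = mass - dm
    # Momentum conservation: m*v = (m - dm)*v' + dm*parcel_v, solved for v'
    new_v = (mass * velocity - dm * parcel_v) / new_mass
    return new_mass, new_v, dm * parcel_v

m, v = 10.0, 0.0
m2, v2, p_ejecta = expel(m, v, 1.0, -5.0)       # expel backwards: the body speeds up
assert abs(m2 * v2 + p_ejecta - m * v) < 1e-12  # total momentum unchanged
```

Subsequent inelastic absorption of an ejected parcel by another object conserves momentum again, which is exactly the bookkeeping the game makes visceral.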
December 19, 2012 12:11 PM
About which model?
December 19, 2012 12:10 PM
November 26, 2012 4:25 PM
It is generally acknowledged that central planning is ineffective at the level of an entire economy. A unitary central decision maker is neither effective nor efficient. But at smaller scales, individual firms and noncommercial institutions are typically governed in a central manner. Similarly, entire countries' political systems are unitary. If this is in fact the most efficient scheme, then what determines the distinction? If not, then how can the invisible hand be embedded deeper inside organizational governance? This couples to the idea that the prevailing model of a small number of highly paid leaders is, at least in part, an accident.
September 20, 2012 1:41 PM
I see two effects here. First, of course, a constant shift in potential will not change our physical system. However, if the charge on the earth changes, charge will flow back and forth to any metallic objects connected to the earth, in order to bring those objects to the same potential as the earth. This is a real physical effect. There is, I think, a second reason why the earth is such a good ground. The coupled earth-atmosphere system as a whole will remain roughly charge neutral, or at least roughly at a particular value of charge set by the earth's interactions with its surroundings. So if someone manages to dump a large net charge into the earth at one location, that immediately implies that there is a similar countercharge somewhere else in the earth-atmosphere system. Thus, overall, the potential of the earth does not shift.
September 20, 2012 1:38 PM
A simple calculation, based upon a conducting sphere of the same radius, reveals that the capacitance of the entire earth is relatively low. How, then, can one consider the ground in an electrical circuit to be a well-defined constant reference potential? A fraction of a coulomb discharged to ground should, based on the earth's capacitance, cause a global shift in the ground potential.
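For concreteness, here is a sketch of the calculation referred to above, treating the earth as an isolated conducting sphere:

```python
import math

EPS0 = 8.854e-12       # vacuum permittivity, F/m
R_EARTH = 6.371e6      # mean earth radius, m

# Self-capacitance of an isolated conducting sphere: C = 4*pi*eps0*R
C_earth = 4 * math.pi * EPS0 * R_EARTH
print(f"C_earth ~ {C_earth * 1e6:.0f} microfarads")   # roughly 700 uF

# Dumping a fraction of a coulomb into such a sphere shifts its potential:
Q = 0.1                                   # coulombs
print(f"dV ~ {Q / C_earth:.0f} volts")    # order 100 V for 0.1 C
```

A tenth of a coulomb, a modest charge by lightning standards, would naively shift "ground" by over a hundred volts, which is what makes the puzzle sharp.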
September 3, 2012 7:39 PM
A thought struck me recently: one could, with something close to present technology, implement an optical system that completely bypasses the native optics of the eye by imaging one's surroundings onto a CCD and then painting that image onto the retina with scanning lasers. The performance could be much superior to that of our native biological lens, and all the existing retinal and post-retinal signal processing could be reused in situ, with no tricky issues of neural interfaces. The normal light paths to the eyes would be blocked: everything you would see would be via the artificial lens system and scanning lasers. The resulting vision could be greatly superior to normal vision (particularly for zoom, low-light performance, and depth of focus). This system would require well-functioning retinas and would be much more complex than the straightforward lens replacements used for, e.g., cataracts, so it would not be a typical medical intervention. How many years before some budding cyborg tries this?
August 19, 2012 1:17 AM
Proper ethical behavior in scientific publishing is something for which most folk think they have a good gut instinct, but rarely is the matter discussed rigorously in a broad community setting. Journals publish professional conduct guidelines (have you read them?). Consider one aspect: the distinction between senior and junior authors. Do senior authors have any rights beyond those of junior authors in the publication process? I find it interesting (and encouraging) that the professional conduct guidelines of the major physics societies don't make any such distinction. The only mention of senior-versus-junior authorship that I've seen in an official professional conduct statement puts extra responsibilities on senior researchers, but grants no extra rights.
August 12, 2012 8:52 PM
(Apologies for the gap in posting and moderating – I was away on international travel for a large part of the summer). A certain jet-powered boat moving at maximum speed can stop within one and a half boat lengths by abruptly reversing the direction of the jets. The same boat tries to stop in a similar manner after the engine has been working at only half of maximum thrust. What distance is required to stop?
June 12, 2012 4:10 PM
Binary knowledge relationships between objects typically occupy a privileged place in ontologies: for example, "object A relates to object B; this relationship, call it C, has property D." Sentence structure itself constrains us to this form: subject + verb + object. A sentence can't have two distinct subjects (although it can have one compound subject). Why is this? Perhaps, when referring to actual events and situations, our brains follow a time-ordered train of thought: first one thought, then another, with the type of relationship between the two defined by the larger mental context at that time. Since we have only one dimension of time, a sequence of time-ordered mental events generates a sequence of binary knowledge relationships, i.e. those between neighboring thoughts. It's such a pity that we don't have two or more time axes, so that we could have more interesting thoughts.
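The "relationship C has property D" pattern is essentially what knowledge-representation systems call reification: promoting the relationship itself to an object that can carry properties. A minimal sketch, with all names hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    """A binary knowledge relationship: subject -- predicate --> object."""
    subject: str
    predicate: str
    obj: str

# Reification: the relationship itself becomes a key that carries properties.
facts = {}
rel = Triple("Alice", "teaches", "PhysicsIntro")
facts[rel] = {"since": 2012, "confidence": "high"}

assert facts[rel]["since"] == 2012
```

Even here, note that the reified relationship is itself just the subject of further binary relationships ("rel has-property since-2012"), so the subject-verb-object straitjacket never really loosens.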
April 18, 2012 4:41 PM
When slowly and carefully cooled into solid form, most materials assume a periodic crystalline structure. This suggests that a periodic ordered arrangement of atoms is the lowest-energy or ground state structure for most materials (the exceptions being glassy substances, which have many possible low-energy states all nearly equal in energy). Why are crystals preferred?
April 4, 2012 9:03 PM
I've placed the answer to this puzzle below in white text. Select it in your browser to see it. Energy arguments are subject to errors if some important degrees of freedom are not captured explicitly (i.e. dissipation). Momentum arguments, being based on a fundamental translational symmetry of space, fail only if the system is not isolated. In this case, the momentum argument is correct and the energy argument is wrong. The reason hides in a faulty unstated assumption: the rope cannot follow the prescribed behavior and also be inextensible. In an interval dt, a length vdt of rope must accelerate from 0 to v, but it is attached to a neighboring piece of rope that is already moving uniformly at v. Hence the rope must momentarily stretch and relax. This motion corresponds to microscopic elastic waves, which rapidly thermalize. The external force must provide this work, hence the missing factor of 1/2. It is interesting that these simple arguments are sufficient to determine the rate of energy dissipation.
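To make the factor of 1/2 concrete, here is the bookkeeping, with λ denoting the rope's mass per unit length (a symbol introduced here, not in the original puzzle statement):

```latex
% Momentum argument: in time dt, a mass \lambda v\,dt is brought from rest to v.
F = \frac{dp}{dt} = \lambda v^{2}
\qquad\Longrightarrow\qquad
P_{\mathrm{in}} = F v = \lambda v^{3}

% Rate of kinetic-energy gain of the rope:
\frac{dE_{K}}{dt} = \frac{1}{2}\,(\lambda v)\,v^{2} = \frac{1}{2}\,\lambda v^{3}

% The shortfall is the dissipation rate in the snap-taut of each new element:
P_{\mathrm{diss}} = P_{\mathrm{in}} - \frac{dE_{K}}{dt} = \frac{1}{2}\,\lambda v^{3}
```

Exactly half the input power becomes kinetic energy; the other half is thermalized by the microscopic stretch-and-relax waves described above.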