The Scientific Image (Clarendon Library of Logic and Philosophy)


From the back cover: The aim of The Scientific Image is to develop an empiricist alternative to both logical positivism and scientific realism.

Hence the assumption of the realist truth of the Copernican hypothesis explains the instrumental usefulness of the Ptolemaic one. Such an explanation of the instrumental usefulness of certain theories would not be possible if all theories were regarded as merely instrumental.

If no theory is assumed to be true, then no theory has its usefulness explained as following from the truth of another one—granted. But would we have less of an explanation of the usefulness of the Ptolemaic hypothesis if we began instead with the premiss that Copernicus's theory is empirically adequate, that the observable motions fit it? This would not assume the truth of Copernicus's heliocentric hypothesis, but would still entail that Ptolemy's simpler description was also a close approximation of those motions.

However, Smart would no doubt retort that such a response pushes the question only one step back: what explains the accuracy of predictions based on Copernicus's theory? If I say, the empirical adequacy of that theory, I have merely given a verbal explanation. For of course Smart does not mean to limit his question to actual predictions—it really concerns all actual and possible predictions and retrodictions. To put it quite concretely: what explains the fact that all observable planetary phenomena fit Copernicus's theory (if they do)?

From the medieval debates, we recall the nominalist response that the basic regularities are merely brute regularities, and have no explanation. Smart's main line of argument is addressed to exactly this point. In the same chapter he argues as follows. Suppose that we have a theory T which postulates micro-structure directly, and macro-structure indirectly. The statistical and approximate laws about macroscopic phenomena are only partially spelt out perhaps, and in any case derive from the precise (deterministic or statistical) laws about the basic entities.

Then he continues: I would suggest that the realist could say: one would have to suppose that there were innumerable lucky accidents about the behaviour mentioned in the observational vocabulary, so that they behaved miraculously as if they were brought about by the nonexistent things ostensibly talked about in the theoretical vocabulary.

The regularities in the observable phenomena must be explained in terms of deeper structure, for otherwise we are left with a belief in lucky accidents and coincidences on a cosmic scale. I submit that if the demand for explanation implicit in these passages were precisely formulated, it would at once lead to absurdity. In any case, it seems to me that it is illegitimate to equate being a lucky accident, or a coincidence, with having no explanation.

It was by coincidence that I met my friend in the market—but I can explain why I was there, and he can explain why he came, so together we can explain how this meeting happened. We call it a coincidence, not because the occurrence was inexplicable, but because we did not severally go to the market in order to meet.

There is nothing here to motivate the demand for explanation, only a restatement in persuasive terms.

The Principle of the Common Cause

Arguing against Smart, I said that if the demand for explanation implicit in his arguments were precisely formulated, it would lead to absurdity. I shall now look at a precise formulation of the demand for explanation: Reichenbach's principle of the common cause.

As Salmon has recently pointed out, if this principle is imposed as a demand on our account of what there is in the world, then we are led to postulate the existence of unobservable events and processes. Suppose that two sorts of events are found to have a correlation. A simple example would be that one occurs whenever the other does; but the correlation may only be statistical.

There is apparently a significant correlation between heavy cigarette-smoking and cancer, though merely a statistical one. Explaining such a correlation is a matter of finding a common cause. But, the argument runs, there are often among observable events no common causes of given observable correlations. Therefore, scientific explanation often requires that there be certain unobservable events. Reichenbach held it to be a principle of scientific methodology that every statistical correlation (at least, every positive dependence) must be explained through common causes.

This means then that the very project of science will necessarily lead to the introduction of unobservable structure behind the phenomena. Scientific explanation will be impossible unless there are unobservable entities; but the aim of science is to provide scientific explanation; therefore, the aim of science can only be served if it is true that there are unobservable entities.

To examine this argument, we must first see how Reichenbach arrived at his notion of common causes and how he made it precise. I will then argue that his principle cannot be a general principle of science at all, and secondly, that the postulation of common causes, when it does occur, is also quite intelligible without scientific realism.

The classical ideal of science had been to find a method of description of the world so fine that it could yield deterministic laws for all processes. What Reichenbach argued very early on is that this ideal has a factual presupposition: it is not logically necessary that such a fine method of description exists, even in principle. So Reichenbach urged philosophers to abandon that classical ideal as the standard of completeness for a scientific theory.

Yet it is clear that, if science does not seek for deterministic laws relating events to what happened before them, it does seek for some laws. We can make this precise using the language of probability theory. Let A and B be two events; we use P to designate their probability of occurrence. In addition, we need the conditional probability P(A|B) of A given B, that is, P(A and B)/P(B). Clearly the probability of rain given that the sky is overcast is higher than the probability of rain in general; in that case B is positively relevant to A: P(A|B) > P(A). Provided A and B are events which have some positive likelihood of occurrence (i.e. P(A), P(B) are not zero), this is a symmetric relationship.
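The symmetry claim can be checked against any toy distribution. A minimal sketch in Python, with hypothetical numbers for A ("rain") and B ("overcast sky"); the events and values are illustrative assumptions, not anything from the argument above:

```python
# Positive statistical relevance with made-up probabilities.
P_A = 0.3    # P(A): rain
P_B = 0.4    # P(B): overcast sky
P_AB = 0.2   # P(A and B)

P_A_given_B = P_AB / P_B    # 0.5, higher than P(A) = 0.3
P_B_given_A = P_AB / P_A    # ~0.667, higher than P(B) = 0.4

# Symmetry, provided P(A) and P(B) are nonzero:
# P(A|B) > P(A)  iff  P(A and B) > P(A)P(B)  iff  P(B|A) > P(B).
assert (P_A_given_B > P_A) == (P_AB > P_A * P_B) == (P_B_given_A > P_B)
print(P_A_given_B, P_B_given_A)
```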

But because of the symmetry of A and B in the relation of positive relevance, this statement by itself gives no reason to think that the smoking produces the cancer rather than the cancer producing the smoking, or both being produced by some other factor, or by several other factors, if any. We are speaking here of facts relating to the same time. The cause we seek in the past: heavy smoking at one time is followed with certain probabilities by heavy smoking at a later time, and also by being cancerous at that later time.

We have in this past event C really found the common cause of this present correlation if P(A and B|C) = P(A|C) P(B|C). We may put this as follows: relative to the information that C has occurred, A and B are statistically independent. C explains the correlation, because we notice a correlation only as long as we do not take C into account. Reichenbach's Principle of the Common Cause is that every relation of positive statistical relevance must be explained by statistical past common causes, in the above way.
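The screening-off condition can likewise be displayed with a toy distribution. In this minimal sketch the variables and all the numbers are hypothetical, chosen so that C screens off A from B by construction:

```python
# A toy distribution in which a common cause C screens off A from B.
P_C = 0.2
P_A_given_C, P_B_given_C = 0.6, 0.5        # A, B independent given C
P_A_given_notC, P_B_given_notC = 0.1, 0.05 # and independent given not-C

P_A = P_C * P_A_given_C + (1 - P_C) * P_A_given_notC          # 0.20
P_B = P_C * P_B_given_C + (1 - P_C) * P_B_given_notC          # 0.14
P_AB = (P_C * P_A_given_C * P_B_given_C
        + (1 - P_C) * P_A_given_notC * P_B_given_notC)        # 0.064

# Unconditionally, A and B are positively relevant to one another ...
print(P_AB > P_A * P_B)    # True: 0.064 > 0.028
# ... but relative to C the correlation vanishes:
# P(A and B | C) = P(A|C) * P(B|C), i.e. Reichenbach's condition holds.
```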

This principle of the common cause is at once precise and persuasive. But it is not a principle that guides twentieth-century science, because it is too close to the demand for deterministic theories of the world that Reichenbach wanted to reject. I shall show this by means of a schematic example; but this example will incorporate the sorts of non-classical correlations which distinguish quantum mechanics from classical physics. I refer here to the correlations exhibited by the thought experiment of Einstein, Podolsky, and Rosen. I maintain in addition that correlations sufficiently similar to refute the principle of the common cause must appear in almost any indeterministic theory of sufficient complexity.

In other words, it is pure chance whether the state to which S transits is characterized by a given one of the F-attributes, and similarly for the G-attributes, but certain that it is characterized by F1 if it is characterized by G1, by F2 if by G2, and so on. If we are convinced that this is an irreducible, indeterministic phenomenon, so that S is a complete description of the initial state, then we have a violation of the principle of the common cause. For with n equally likely outcomes we can deduce that P(Fi and Gi|S) = 1/n while P(Fi|S) P(Gi|S) = 1/n², which numbers are equal only if n is zero or one—the deterministic case. In all other cases, S does not qualify as the common cause of the new state's being Fi and Gi, and if S is complete, nothing else can qualify either.
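A short numerical rendering of this point, under my own simplifying assumption that the n outcome pairs are equally likely:

```python
# The schematic indeterministic transition: from state S the system jumps
# by pure chance to one of n new states, the i-th characterized by both
# F_i and G_i. Equiprobability is assumed here only for illustration.
n = 3
P_FiGi_given_S = 1 / n              # perfect correlation between F_i, G_i
P_Fi_given_S = P_Gi_given_S = 1 / n # the marginals

# Screening off by S would require 1/n == 1/n**2: the deterministic case.
print(P_FiGi_given_S, P_Fi_given_S * P_Gi_given_S)  # 0.333... vs 0.111...
assert (P_FiGi_given_S == P_Fi_given_S * P_Gi_given_S) == (n == 1)
```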

The example I have given is schematic and simplified, and besides its indeterminism, it also exhibits a certain discontinuity, in that we discuss the transition of a system from one state S into a new state. In classical physics, if a physical quantity changed its value from i to j it would do so by taking on all the values between i and j in succession, that is, changing continuously.

Would Reichenbach's principle be obeyed at least in some non-trivial, indeterministic theory in which all quantities have a continuous spectrum of values? I think not, but I shall not argue this further. The question is really academic, for if the principle requires that, then it is also not acceptable to modern physical science. Could one change a theory which violates Reichenbach's principle into one that obeys it, without upsetting its empirical adequacy?

Possibly; one would have to deny that the attribution of state S gives complete information about the system at the time in question, and postulate hidden parameters that underlie these states.

Attempts to do so for quantum mechanics are referred to as hidden variable theories, but it can be shown that if such a theory is empirically equivalent to orthodox quantum mechanics, then it still exhibits non-local correlations of a non-classical sort, which would still violate Reichenbach's principle.

But again, the question is academic, since modern physics does not recognize the need for such hidden variables. Could Reichenbach's principle be weakened so as to preserve its motivating spirit, while eliminating its present unacceptable consequences? Note that in the schematic example I gave, S would then qualify as a common cause for the events Fi and Gi.

But so formulated, the principle yields a regress. This regress stops only if, at some point, the exhibited common cause satisfies the original screening-off equation, which brings us back to our original situation; or if some other principle is used to curtail the demand for explanation. In any case, weakening the principle in various ways (and certainly it will have to be weakened if it is going to be acceptable in any sense) will remove the force of the realist arguments. Nevertheless, there is a problem here that should be faced. Without a doubt, many scientific enterprises can be characterized as searches for common causes to explain correlations.

What is the anti-realist to make of this? Are they not searches for explanatory realities behind the phenomena? I think that there are two senses in which a principle of common causes is operative in the scientific enterprise, and both are perfectly intelligible without realism. To the anti-realist, all scientific activity is ultimately aimed at greater knowledge of what is observable.

So he can make sense of a search for common causes only if that search aids the acquisition of that sort of knowledge. But surely it does! When past heavy smoking is postulated as a causal factor for cancer, this suggests a further correlation between cancer and either irritation of the lungs, or the presence of such chemicals as nicotine in the bloodstream, or both.

The postulate will be vindicated if such suggested further correlations are indeed found, and will, if so, have aided in the search for larger-scale correlations among observable events. There is a second sense in which the principle of the common cause may be operative: as advice for the construction of theories and models. One way to construct a model for a set of observable correlations is to exhibit hidden variables with which the observed ones are individually correlated.


This is a theoretical enterprise, requiring mathematical embedding or existence proofs. As a theoretical directive, or as a practical maxim, the principle of the common cause may well be operative in science—but not as a demand for explanation which would produce the metaphysical baggage of hidden parameters that carry no new empirical import. I have discussed a number of Sellars's views and arguments elsewhere; but will here concentrate on some aspects that are closely related to the arguments of Smart, Reichenbach, and Salmon just examined.

The three levels are commonly called those of fact, of empirical law, and of theory. But, as Sellars points out, theories do not explain, or even entail such empirical laws—they only show why observable things obey these so-called laws to the extent they do. On the level of the observable we are liable to find only putative laws, heavily subject to unwritten ceteris paribus qualifications.

This is, so far, only a methodological point. But a theory which says that the micro-structure of things is subject to some exact, universal regularities must imply the same for those things themselves. This, at least, is my reaction to the points so far. Sellars, however, sees an inherent inferiority in the description of the observable alone, an incompleteness which requires (sub specie the aims of science) an introduction of an unobservable reality behind the phenomena.

Observationally unpredictable variation in the rate of dissolution is explained by saying that the samples are mixtures (not compounds) of these two observationally identical substances, each of which has a fixed rate of dissolution. In this case we have explanation through laws which have no observational counterparts that can play the same role.

Indeed, no explanation seems possible unless we agree to find our physical variables outside the observable. But science aims to explain, must try to explain, and so must require a belief in this unobservable micro-structure. So Sellars contends. There are at least three questions before us. Did this postulation of micro-structure really have no new consequences for the observable phenomena? Is there really such a demand upon science that it must explain—even if the means of explanation bring no gain in empirical predictions?

And thirdly, could a different rationale exist for the use of a micro-structure picture in the development of a scientific theory in a case like this? First, it seems to me that these hypothetical chemists did postulate new observable regularities as well. So Sellars's first contention is false.

We may assume, for the sake of Sellars's example, that there is still no way of predicting dissolving rates any further. Is there then a categorical demand upon science to explain this variation which does not depend on other observable factors? Sellars recognized very well that a demand for hidden variables would run counter to the main opinions current in quantum physics.

And consistency is surely a logical stopping-point. This restriction unfortunately does not prevent the disaster. For while there are a number of proofs that hidden variables cannot be supplied so as to turn quantum mechanics into a classical sort of deterministic theory, those proofs are based on requirements much stronger than consistency. To give an example, one such assumption is that two distinct physical variables cannot have the same statistical distributions in measurement on all possible states. If such requirements were lifted, and consistency alone were the criterion, hidden variables could indeed be introduced.

I think we must conclude that science, in contrast to scientific realism, does not place an overriding value on explanation in the absence of any gain for empirical results. Thirdly, then, let us consider how an anti-realist could make sense of those hypothetical chemists' procedure.

After pointing to the new empirical implications which I mentioned three paragraphs ago, he would point to methodological reasons. By imagining a certain sort of micro-structure for gold and other metals, say, we might arrive at a theory governing many observationally disparate substances; and this might then have implications for new, wider empirical regularities when such substances interact.

This would only be a hope, of course; no hypothesis is guaranteed to be fruitful—but the point is that the true demand on science is not for explanation as such, but for imaginative pictures which have a hope of suggesting new statements of observable regularities and of correcting old ones.


This point is exactly the same as that for the principle of the common cause.

Demons and the Ultimate Argument

Hilary Putnam, in the course of his discussions of realism in logic and mathematics, advanced several arguments for scientific realism as well. In Philosophy of Logic he concentrates largely on indispensability arguments—concepts of mathematical entities are indispensable to non-elementary mathematics, theoretical concepts are indispensable to physics.

Putnam attacks this position in a roundabout way, first criticizing bad arguments against Fictionalism, and then garnering his reasons for rejecting Fictionalism from that discussion. The main bad reason he sees is that of Verificationism. The logical positivists adhered to the verificationist theory of meaning, which is roughly that the total cognitive content of an assertion, all that is meaningful in it, is a function of what empirical results would verify or refute it.

Hence, they would say that there are no real differences between two hypotheses with the same empirical content. Consider two theories of what the world is like: Rutherford's atomic theory, and Vaihinger's hypothesis that, although perhaps there are no electrons and such, the observable world is nevertheless exactly as if Rutherford's theory were true.


The Verificationist would say: these two theories, although Vaihinger's appears to be consistent with the denial of Rutherford's, amount to exactly the same thing. Well, they don't, because the one says that there are electrons, and the other allows that there may not be. Even if the observable phenomena are as Rutherford says, the unobservable may be different.

However, the positivists would say, if you argue that way, then you will automatically become a prey to scepticism. You will have to admit that there are possibilities you cannot prove or disprove by experiment, and so you will have to say that we just cannot know what the world is like.


Worse: you will have no reason to reject any number of outlandish possibilities; demons, witchcraft, hidden powers collaborating to fantastic ends. Putnam considers this argument for Verificationism to be mistaken, and his answer to it, strangely enough, will also yield an answer to the Fictionalism rejected by the verificationist. To dispel the bogey of scepticism, Putnam gives us a capsule introduction to contemporary (Bayesian) epistemology: rationality requires that if two hypotheses have the same testable consequences, we should not accept the one which is a priori the less plausible. Where do we get our a priori plausibility orderings?

These we supply ourselves, either individually or as communities: to accept a plausibility ordering is neither to make a judgment of empirical fact nor to state a theorem of deductive logic; it is to take a methodological stand. Does each simply report the stand he has taken, and add: this is, in my view, the stand of all rational men? How disappointing. Actually, it does not quite go that way. Putnam has skilfully switched the discussion from electrons to demons, and asked us to consider how we could rule out their existence.

As presented, however, Vaihinger's view differed from Rutherford's by being logically weaker—it only withheld assent to an existence assertion. It follows automatically that Vaihinger's view cannot be a priori less plausible than Rutherford's. He has himself just argued forcefully that theories could agree in empirical content and differ in truth-value.

Hence, a realist will have to make a leap of faith. The decision to leap is subject to rational scrutiny, but not dictated by reason and evidence. Putnam begins with a formulation of realism which he says he learned from Michael Dummett: a realist, with respect to a given theory or discourse, holds that (1) the sentences of that theory are true or false; and (2) that what makes them true or false is something external—that is to say, it is not in general our sense data, actual or potential, or the structure of our minds, or our language, etc.

Because the wide discussion of Dummett's views has given some currency to his usage of these terms, and because Putnam begins his discussion in this way, we need to look carefully at this formulation. In my view, Dummett's usage is quite idiosyncratic. Putnam's statement, though very brief, is essentially accurate. But Dummett says that in some of the cases he wishes to discuss, such as the reality of the past and intuitionism in mathematics, the central issues seem to him to be about other questions. For this reason he proposes a new usage: he will take such disputes as relating, not to a class of entities or a class of terms, but to a class of statements.

Realism I characterize as the belief that statements of the disputed class possess an objective truth-value, independently of our means of knowing it: they are true or false in virtue of a reality existing independently of us. The anti-realist opposes to this the view that statements of the disputed class are to be understood only by reference to the sort of thing which we count as evidence for a statement of that class.

It might be objected that if you take this position then you have a decision procedure for determining the truth-values of these statements (false for existentially quantified ones, true for universal ones, apply truth tables for the rest). Does that not mean that, on your view, their truth-values are not independent of us after all? Not at all; for you clearly believe that if we had not existed, and a fortiori had had no knowledge, the state of affairs with respect to abstract entities would be the same.
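The objector's decision procedure is trivial to program. A minimal sketch, assuming a toy representation of sentences as nested tuples (my own illustrative device, not anything in the discussion above):

```python
# The nominalist's decision procedure for statements about abstract
# entities, on the view that the domain of such entities is empty:
# existential claims are false, universal ones vacuously true, and
# truth-functional compounds are settled by truth tables.
def truth_value(sentence):
    op = sentence[0]
    if op == "exists":
        return False                 # nothing of that kind exists
    if op == "forall":
        return True                  # vacuously true over an empty domain
    if op == "not":
        return not truth_value(sentence[1])
    if op == "and":
        return truth_value(sentence[1]) and truth_value(sentence[2])
    raise ValueError(op)

print(truth_value(("exists", "a number x such that x > 7")))     # False
print(truth_value(("forall", "numbers x, x * 0 = 0")))           # True
print(truth_value(("not", ("exists", "a set with no members")))) # True
```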

Has Dummett perhaps only laid down a necessary condition for realism, in his definition, for the sake of generality? I do not think so. In any traditional sense, this is a realist position with respect to quantum mechanics. We note also that Dummett has, at least in this passage, taken no care to exclude non-literal construals of the theory, as long as they are truth-valued. Certainly the arguments in which he engages are profound, serious, and worthy of our attention.

But it seems to me that his terminology ill accords with the traditional one. Certainly I wish to define scientific realism so that it need not imply that all statements in the theoretical language are true or false (only that they are all capable of being true or false, that is, there are conditions for each under which it has a truth-value); to imply nevertheless that the aim is that the theories should be true.

And the contrary position of constructive empiricism is not anti-realist in Dummett's sense, since it also assumes scientific statements to have truth-conditions entirely independent of human activity or knowledge. But then, I do not conceive the dispute as being about language at all. In any case Putnam himself does not stick with this weak formulation of Dummett's.

A little later in the paper he directs himself to scientific realism per se, and formulates it in terms borrowed, he says, from Richard Boyd. The new formulation comes in the course of a presentation of what may be called the Ultimate Argument: realism, Putnam says, is the only philosophy that does not make the success of science a miracle. But that success is no miracle. It is not even surprising to the scientific (Darwinist) mind. For any scientific theory is born into a life of fierce competition, a jungle red in tooth and claw.

Only the successful theories survive—the ones which in fact latched on to actual regularities in nature.

In fact, the author's interest in hidden-variable theories was kindled only when recently he became aware of the possibility of such experimental tests. On the other hand, we do not want to ignore the metaphysical implications of the theory.
Belinfante, Foreword, A Survey of Hidden-Variable Theories

The realist arguments discussed so far were developed mainly in a critique of logical positivism. Much of that critique was correct and successful: the positivist picture of science no longer seems tenable. Since that was essentially the only picture of science within philosophical ken, it is imperative to develop a new account of the structure of science.

This account should especially provide a new answer to the question: what is the empirical content of a scientific theory?

Models

Before turning to examples, let us distinguish the syntactic approach to theories from the semantic one which I favour. Modern axiomatics stems from the discussion of alternative geometric theories, which followed the development of non-Euclidean geometry in the nineteenth century.

The first meta-mathematics was meta-geometry, a term already used in Bertrand Russell's Essay on the Foundations of Geometry in 1897. It will be easiest perhaps to introduce the relevant axiomatic concepts by way of some simple geometric theories. Consider the axioms: (A0) there is at least one line; and further axioms of incidence between points and lines, together with two rival additions to that common core T0: that there are only finitely many points (giving the theory T1), or that every line contains infinitely many points (giving T2). The first four axioms are easily seen to be true of this structure, the Seven Point Geometry: the line DEF, for instance, satisfies what they require of lines. Any structure which satisfies the axioms of a theory in this way is called a model of that theory. Hence, the structure just exhibited is a model of T1, and also of T0, but not of T2.
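To make the notion of a model concrete, here is a minimal sketch assuming the standard seven-point (Fano) incidence structure in its usual triangle drawing, on which DEF is one of the seven lines; the two incidence properties checked are representative axioms of such geometric theories, chosen for illustration:

```python
# The Seven Point Geometry as a finite structure: seven points, seven
# lines of three points each (vertices A, B, C; midpoints D, E, F;
# centre G in the usual triangle drawing).
from itertools import combinations

points = set("ABCDEFG")
lines = [set(l) for l in ("ADB", "BEC", "CFA", "AGE", "BGF", "CGD", "DEF")]

assert len(lines) >= 1                     # there is at least one line
for p, q in combinations(sorted(points), 2):
    # any two distinct points lie on exactly one common line
    assert sum(1 for l in lines if {p, q} <= l) == 1
for i, j in combinations(range(len(lines)), 2):
    # any two distinct lines share exactly one point
    assert len(lines[i] & lines[j]) == 1
print("the seven-point structure satisfies these axioms: it is a model")
```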

The existence of a model establishes consistency by a very simple straightforward argument: all the axioms of the theory (suitably interpreted) are true of the model; hence all the theorems are similarly true of it; but no contradiction can be true of anything; therefore, no theorem is a contradiction.

Thus logical claims, formulated in purely syntactic terms, can nevertheless often be demonstrated more simply by a detour via a look at models—but the notions of truth and model belong to semantics. Nor is semantics merely the handmaiden of logic. For look at the theories T1 and T2; logic tells us that these are inconsistent with each other, and there is an end to it. The axioms of T1 can only be satisfied by finite structures; the axioms of T2, however, are satisfied only by infinite ones such as the Euclidean plane. Yet you will have noticed that I drew a Euclidean triangle to convey what the Seven Point Geometry looks like.

For that seven-point structure can be embedded in a Euclidean structure. We say that one structure can be embedded in another, if the first is isomorphic to a part (substructure) of the second. Isomorphism is of course total identity of structure and is a limiting case of embeddability: if two structures are isomorphic then each can be embedded in the other. The seven-point geometry is isomorphic to a certain Euclidean plane figure, or in other words, it can be embedded in the Euclidean plane. This points to a much more interesting relationship between the theories T1 and T2 than inconsistency: every model of T1 can be embedded in (identified with a substructure of) a model of T2.
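For finite structures, embeddability in this sense can be checked by brute force. A minimal sketch, with a chain example of my own chosen only to display the asymmetry of the relation:

```python
# One structure embeds in another iff some injection of its points maps
# its relation exactly onto the restriction of the other's relation to
# the image (i.e. the first is isomorphic to a substructure of the second).
from itertools import permutations

def embeds(pts1, rel1, pts2, rel2):
    pts1, pts2 = list(pts1), list(pts2)
    for image in permutations(pts2, len(pts1)):
        f = dict(zip(pts1, image))
        if all(((f[a], f[b]) in rel2) == ((a, b) in rel1)
               for a in pts1 for b in pts1):
            return True
    return False

chain3 = {(0, 1), (1, 2)}            # a three-point "line segment"
chain4 = {(0, 1), (1, 2), (2, 3)}    # a four-point "line segment"
print(embeds(range(3), chain3, range(4), chain4))  # True: part of the larger
print(embeds(range(4), chain4, range(3), chain3))  # False: no room
```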

The syntactic picture of a theory identifies it with a body of theorems, stated in one particular language chosen for the expression of that theory. This should be contrasted with the alternative of presenting a theory in the first instance by identifying a class of structures as its models. In this second, semantic, approach the language used to express the theory is neither basic nor unique; the same class of structures could well be described in radically different ways, each with its own limitations.

The models occupy centre stage. Scientists too speak of models, and even of models of a theory, and their usage is somewhat different. It refers rather to a type of structure, or class of structures, all sharing certain general characteristics. For in that usage, the Bohr model was intended to fit hydrogen atoms, helium atoms, and so forth.

Whenever certain parameters are left unspecified in the description of a structure, it would be more accurate to say (contrary of course to common usage and convenience) that we described a structure-type. Rather than pursue this general discussion I turn now to a concrete example of a physical theory, in order to introduce the crucially relevant notions by illustration.

Apparent Motion and Absolute Space

When Newton wrote his Mathematical Principles of Natural Philosophy and System of the World, he carefully distinguished the phenomena to be saved from the reality to be postulated.

Ptolemy described the celestial motions on the assumption that the earth was stationary. For him, there was no distinction between true and apparent motion: the true motion is exactly what is seen in the heavens. What that motion is, may of course not be evident at once: it takes thought to realize that a planet's motion really does look like a circular motion around a moving centre.

In Copernicus's theory, the sun is stationary. Hence, what we see is only the planets' motion relative to the earth, which is not itself stationary. The apparent motion of the planets is identified as the difference between the earth's true motion and the planets' true motion—true motion being, in this case, motion relative to the sun. Finally, Newton, in his general mechanics, did not assume that either the earth or the sun is stationary.

And he generalized the idea of apparent motion—which is motion relative to the earth—to that of motion of one body relative to another. We can speak of the planets' motion relative to the sun, or to the earth, or to the moon, or what have you. What is observed is always some relative motion: an apparent motion is a motion relative to the observer. And Newton held that relative motions are always identifiable as a difference between true motions, whatever those may be (an assertion which can be given precise content using vector representation of motion). For brevity, let us call these relational structures appearances.

In the mathematical model provided by Newton's theory, bodies are located in Absolute Space, in which they have real or absolute motions. But within these models we can define structures that are meant to be exact reflections of those appearances, and are, as Newton says, identifiable as differences between true motions. These structures, defined in terms of the relevant relations between absolute locations and absolute times, which are the appropriate parts of Newton's models, I shall call motions, borrowing Simon's term.

When Newton claims empirical adequacy for his theory, he is claiming that his theory has some model such that all actual appearances are identifiable with (isomorphic to) motions in that model. This refers of course to all actual appearances throughout the history of the universe, whether in fact observed or not. It is part of his theory that there is such a thing as Absolute Space, that absolute motion is motion relative to Absolute Space, that absolute acceleration causes certain stresses and strains and thereby deformations in the appearances, and so on. He offered in addition the hypothesis (his term) that the centre of gravity of the solar system is at rest in Absolute Space.

This is the case for two reasons: differences between true motions are not changed if we add a constant factor to all velocities; and force is related to changes in motion (accelerations) and not to motion directly. Let us call Newton's theory (mechanics and gravitation) TN, and TN(v) the theory TN plus the postulate that the centre of gravity of the solar system has constant absolute velocity v.
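The two reasons can be displayed in a few lines; the bodies and velocity values below are made-up numbers, chosen only to illustrate the invariance:

```python
# Adding one constant velocity w to every body changes each absolute
# velocity but no difference between velocities; and since forces track
# accelerations (changes of velocity), they are unchanged as well.
velocities = {"earth": 30.0, "jupiter": 13.0}   # hypothetical values
w = 250.0                                       # an arbitrary constant boost

boosted = {body: v + w for body, v in velocities.items()}

relative = velocities["earth"] - velocities["jupiter"]
relative_boosted = boosted["earth"] - boosted["jupiter"]
assert relative == relative_boosted   # the relative motions agree exactly

# So the theories TN(0) and TN(w) disagree about absolute velocities,
# while the appearances (relative motions) fit both equally well.
```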

By Newton's own account, he claims empirical adequacy for TN(0); and also that, if TN(0) is empirically adequate, then so are all the theories TN(v). For now, let us agree that these theories are empirically equivalent, referring objections to a later section. For each of the theories TN(v) has such consequences as that the earth has some absolute velocity, and that Absolute Space exists.

In each model of each theory TN(v) there is to be found something other than motions, and there is the rub. To believe a theory is to believe that one of its models correctly represents the world.


You can think of the models as representing the possible worlds allowed by the theory; one of these possible worlds is meant to be the real one. To believe the theory is to believe that exactly one of its models correctly represents the world (not just to some extent, but in all respects). Therefore, if we believe of a family of theories that all are empirically adequate, but each goes beyond the phenomena, then we are still free to believe that each is false, and hence their common part is false.

For that common part is phraseable as: one of the models of one of those theories correctly represents the world. Consider instead the theory TNE whose single axiom is the assertion that TN(0) is empirically adequate: that TN(0) has a model containing motions isomorphic to all the appearances. Since TN(0) can be stated in English, this completes the job. It may be objected that so stated, TNE does not look like a physical theory. Indeed it looks metalinguistic.

This is a poor objection. The theory is clearly stated in English, and that suffices. Whether or not it is axiomatizable in some more restricted vocabulary may be a question of logical interest, but philosophically it is irrelevant. Secondly, if the set of models of TN(0) can be described without metalinguistic resources, then the above statement of TNE is easily turned into a non-metalinguistic statement too. Not that this matters. The only important point here is that the empirical import of a family of empirically equivalent theories is not usually their common part, but can be characterized directly in the same terms in which empirical adequacy is claimed.

Theories and Their Extensions

The objection may be raised that theories can seem empirically equivalent only as long as we do not consider their possible extensions. When we consider their application beyond the originally intended domain of application, or their combination with other theories, differences may come to light. This example is imperfect, for it was known that the two theories disagreed even on macroscopic phenomena over sufficiently long periods of time. A perfect example can be constructed as a piece of quite realistic science fiction: let us imagine that such experiments as Michelson and Morley's, which led to the rise of the theory of relativity, did not have their spectacular, actual, null outcome, and that Maxwell's theory of electromagnetism was successfully combined with classical mechanics.

In retrospect we realize that such a development would have upset even Newton's deepest convictions about the relativity of motion; but we can imagine it. Electrified and magnetic bodies appear to set each other in motion although they are some distance apart. Early in the nineteenth century mathematical theories were developed treating these phenomena in analogy with gravitation, as cases of action at a distance, by means of forces which such bodies exert on each other.

But the analogy could not be perfect: it was found necessary to postulate that the force between two charged particles depends on their velocity as well as on the distance. Adapting the idea of a universal medium of the propagation of light and heat (the luminiferous medium, or ether) found elsewhere in physics, Maxwell developed his theory of the electromagnetic field, which pervades the whole of space: 'It appears therefore that certain phenomena in electricity and magnetism lead to the same conclusions as those of optics, namely, that there is an ethereal medium pervading all bodies, and modified only in degree by their presence.'

Maxwell's Equations describe how this field develops in time. The difficulties with Maxwell's theory concerned the mechanics of this medium; and his ideas about what this medium was like were not successful. But this did not blind the nineteenth century to the power and adequacy of his equations describing the electromagnetic field. It would therefore not be appropriate to call Maxwell's theory a mechanical theory, but it did have mechanical models.

There was this strange new feature, however: the forces depend on the velocities, and not merely on the accelerations. There was accordingly a spate of thought-experiments designed to measure absolute velocity. 'In measuring this attraction, we shall measure the velocity of the earth; not its velocity in relation to the sun or the fixed stars, but its absolute velocity.' But let us imagine that the classical expectations were not disappointed. Imagine that values are found for absolute velocities; specifically for the centre of gravity of the solar system. In that case, it would seem, one of the theories TN(v) may be confirmed and all the others falsified.

Hence those theories were not empirically equivalent after all. But the reasoning is spurious. The definition of empirical equivalence did not rely on the assumption that only absolute acceleration can have discernible effects. Newton made the distinction between sensible measures and apparent motions on the one hand, and true motions on the other, without presupposing more than that basic mechanics within which there are models for Maxwell's equations. This assertion was the reason for the claim of empirical equivalence.

The question before us is whether that assertion was controverted by those nineteenth-century reflections. The answer is definitely no. The thought-experiment, we may imagine, confirmed the theory that adds to TN the hypotheses:

H(0): The centre of gravity of the solar system is at absolute rest.

This theory has a consequence strictly about appearances:

CON: Two electrified bodies moving with velocity v relative to the centre of gravity of the solar system attract each other with force F(v).

However, that same consequence can be had by adding to TN the two alternative hypotheses:

H(w): The centre of gravity of the solar system has absolute velocity w,

together with a correspondingly adjusted electromagnetic theory. More generally, for each theory TN(v) there is an electromagnetic theory E(v) such that E(0) is Maxwell's and all the combined theories TN(v) plus E(v) are empirically equivalent with each other. Only familiar examples, rightly stated, are needed, it seems, to show the feasibility of the concepts of empirical adequacy and equivalence. In the remainder of this chapter I shall try to generalize these considerations, while showing that the attempts to explicate those concepts syntactically had to reduce them to absurdity.

Extensions: Victory and Qualified Defeat

The idea that theories may have hidden virtues by allowing successful extensions to new kinds of phenomena is too pretty to be left unexamined. Developed independently of the example in the last section, it might yet trivialize empirical equivalence. Nor is it a very new idea. In the first lecture of his Cours de philosophie positive, Comte referred to Fourier's theory of heat as showing the emptiness of the debate between partisans of calorific matter and kinetic theory. The illustrations of empirical equivalence have that regrettable tendency to date; calorifics lost.

To evaluate this suggestion we must ask what exactly is an extension of a theory. Let us suppose, as in the last section, that experiments did indicate the combined theory TN(0) plus E(0). In that case we would surely say that mechanics had been successfully extended to electromagnetism. What, then, is a successful extension?

There were mechanical models of electromagnetic phenomena; and also of the phenomena more traditionally subject to mechanics. What we have supposed is that all these appearances could jointly find a home among the motions in a single model of TN(0). Certainly, we have here an extension of TN(0), but first and foremost we have a victory.

We have an extension, for the class of models that may represent the phenomena has been narrowed to those which satisfy the equations of electromagnetism. But it is a victory for TN(0) because it simply bears out the claim that TN(0) is empirically adequate: all appearances can be identified with motions in one of its models. Such victorious extensions can never distinguish between empirically equivalent theories in the sense in which that relation was described above, for such theories have exactly the same resources for modelling appearances.

It follows logically from the italicized description in Section 2 that if one theory enjoys such a victory, then so will all those empirically equivalent to it. So if Enriques's idea is to be correct at all, there must be other sorts of extensions, which are not victories.

Let us suppose that a theory is confronted with new phenomena, and these are not even piece-wise identifiable with motions in the models of that theory. There seems to be one possibility intermediate between victory and total defeat. The class of substructures called motions might, for example, be widened to a larger class; let us say, pseudo-motions.

And the theory might be weakened, so that it would claim only that every appearance can be identified with a pseudo-motion. This would be a defeat, for the claim that the old theory is empirically adequate has been rescinded. But still it may be called an extension rather than a replacement, for the class of models (the over-all structures within which motions and pseudo-motions are defined) has remained the same. It is therefore an extension, which is not a victory but anyway a qualified defeat.

It is not so easy to find an example of this kind of extension within the sphere of mechanics, but the following may be one. Brian Ellis constructed a theory in which no forces are postulated, but the available motions are the same as in Newton's mechanics plus the postulate of universal gravitation. But Ellis has pointed out that Newton's theory has a certain kind of superiority in that, if the effect of gravitation is just slightly different, then Newton's theory is much more easily amended than his. In other words, if Newton's theory turned out wrong in its astronomical predictions, there is an obvious way to try and repair it, without touching his basic laws of motion.

It is possible to construe this as follows: the two theories are empirically equivalent, but Newton's allows of certain obvious extensions of the second sort. To see it this way, one has to take the law G of universal gravitation as defining the motions described in terms of relative distances in Newton's models: a motion is a set of trajectories for which masses and forces can be found such that Newton's laws of motion and G are satisfied.

It will, however, be clear that the second sort of extension is a defeat. There is a certain kind of superiority perhaps in the ability to sustain qualified rather than total defeat. But it is a pragmatic superiority. It cannot serve to upset the conclusion that two theories are empirically equivalent, for it does not show that they differ in any way (not even conditionally, not even counterfactually) in their empirical import. Let me close this section with an example of another sort of pragmatic superiority, which strikes me as quite similar.

Suppose that two proposed theories have different axioms, but turn out to have the same theorems and the same models, and the same specification of empirical substructures. I do not suppose that anyone would think that these two theories say different things. Even so, there may be a recognizable superiority, which appears when we attempt to generalize them. An interesting example of this is given by quantum mechanics. When he wrote his own theory, von Neumann could have chosen either of the following principles concerning the combination of observable quantities to serve as an axiom: (1) that expectation values are additive for all observable quantities, or (2) that they are additive for simultaneously measurable quantities. In fact, von Neumann chose (1).

When he then came to the question of hidden variables, he showed that their existence would contradict the generalization of his basic axioms to states supplemented with hidden variables. However, it can easily be shown that any reasonable hidden variable theory must reject the generalization of (1), although it can accept (2). Had von Neumann chosen his axioms differently, he might well have reached the conclusion that (1) can be demonstrated for all quantum-mechanical states, but does not hold for the postulable underlying microstates—and hence, that there could be hidden variables after all.
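The contrast between (1) and (2) can be illustrated with spin-1/2 observables; this is a sketch of the general point under my own choice of example, not von Neumann's own argument:

```python
# Expectation values are additive in every quantum state, even for
# non-commuting observables; eigenvalue assignments cannot be.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

psi = np.array([1.0, 0.5 + 0.5j], dtype=complex)
psi /= np.linalg.norm(psi)
expval = lambda A: (psi.conj() @ A @ psi).real

# Principle (1) holds for the quantum state:
assert np.isclose(expval(sx + sy), expval(sx) + expval(sy))

# But a dispersion-free "microstate" would have to assign each quantity
# one of its eigenvalues: +-1 to sx and to sy, +-sqrt(2) to sx + sy; and
# no sum of +-1 and +-1 equals +-sqrt(2), so additivity of values fails.
print(np.linalg.eigvalsh(sx))       # [-1.  1.]
print(np.linalg.eigvalsh(sy))       # [-1.  1.]
print(np.linalg.eigvalsh(sx + sy))  # [-1.414...  1.414...]
```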

Such pragmatic superiorities of one theory over another are of course very important for the progress of science. But since they can appear even between different formulations of the same theory, and also may only show up in actual defeat, they are no reflection on what the theory itself says about what is observable.

Failure of the Syntactic Approach

Specific examples of empirical adequacy and equivalence should suffice to establish the correctness and non-triviality of these concepts; but we need an account of them in general. It is here that the syntactic approach has most conspicuously been tried, and has most conspicuously failed. The syntactic explication of these concepts is familiar, for it is the backbone of the account of science developed by the logical positivists.

A theory is to be conceived as what logicians call a deductive theory, hence, a set of sentences (the theorems), in a specified vocabulary. The vocabulary is divided into two classes, the observational terms and the theoretical terms. Let us call the observational sub-vocabulary E. An extension of a theory is just an axiomatic extension. Obvious questions were raised and settled. A theory would seem not to be usable by scientists if it is not axiomatizable.

But logicians attached importance to questions about restricted vocabularies, and that was seemingly enough for philosophers to think them important too. A more philosophical problem was apparently posed by the very distinction between observational and theoretical terms. Certainly in some way every scientific term is more or less directly linked with observation. The empirical import of a theory cannot be isolated in this syntactical fashion, by drawing a distinction among theorems in terms of vocabulary.

But any unobservable entity will differ from the observable ones in the way it systematically lacks observable characteristics. As long as we do not abjure negation, therefore, we shall be able to state in the observational vocabulary (however conceived) that there are unobservable entities, and, to some extent, what they are like.

The quantum theory, Copenhagen version, implies that there are things which sometimes have a position in space, and sometimes have not. This consequence I have just stated without using a single theoretical term. Newton's theory implies that there is something (to wit, Absolute Space) which neither has a position nor occupies a volume; such consequences, too, can be stated without theoretical terms.

Thus on the syntactic approach, the distinction between truth and empirical adequacy reduces to triviality or absurdity, it is hard to say which. Similarly for empirical equivalence. But the former states that there is something (to wit, Absolute Space) which is different from every appearance by lacking even those minimal characteristics which all appearances share. Philosophers seem to have been bothered more by ways in which the syntactic definition of empirical equivalence might be too broad.

Such theories presumably derive their empirical import from the consequences they have when conjoined with other theories or empirical hypotheses.


To eliminate this embarrassment, extensions of theories were considered. TN(0) and TNE are again declared non-equivalent. Worse yet, TN(0) is no longer empirically equivalent to the other theories TN(v).



This is shown by the examples of spurious reasoning in Section 4 above: TN(0) plus E(0) is not equivalent to TN(v) plus E(0) for non-zero values of v. But all the theories TN(v) are empirically equivalent. Nor is it easy to see how we could restrict the class of axiomatic extensions to be considered so as to repair this deficiency.

These criticisms should suffice to show that the flaws in the linguistic explication of the empirical import of a theory are not minor or superficial. They do not, of course, constitute an a priori proof that no suitable observation language could be devised. But such a project loses all interest when it appears so clearly that, even if such a language could exist, it would not help us to separate out the information which a theory gives us about what is observable.

It seems in addition highly unlikely that such a language could exist. For at the very least, if it existed it might not be translatable into natural language. An observation language would be theoretically neutral at all levels. So if A and B are two of its simplest sentences, they would be logically independent. Pursuing such questions further does not seem likely to shed any light on the nature or structure of science. The syntactically defined relationships are simply the wrong ones. Perhaps the worst consequence of the syntactic approach was the way it focused attention on philosophically irrelevant technical questions.

The main lesson of twentieth-century philosophy of science may well be this: no concept which is essentially language-dependent has any philosophical importance at all.

The Hermeneutic Circle

We have seen that we cannot interpret science, and isolate its empirical content, by saying that our language is divided into two parts.

Nor should that conclusion surprise us. The phenomena are saved when they are exhibited as fragments of a larger unity. For that very reason it would be strange if scientific theories described the phenomena, the observable part, in different terms from the rest of the world they describe.

And so an attempt to draw the conceptual line between phenomena and the trans-phenomenal by means of a distinction of vocabulary must always have looked too simple to be good. But there has been a further assumption common also to critics of that distinction: that the distinction is a philosophical one. To draw it, in principle anyway, philosophy must mobilize theories of sensing and perceiving, sense data and experiences, Erlebnisse and Protokollsätze.

If the distinction is a philosophical one, then it is to be drawn, if at all, by philosophical analysis, and to be attacked, if at all, by philosophical arguments. This attitude needs a Grand Reversal. If there are limits to observation, these are a subject for empirical science, and not for philosophical analysis.

Nor can the limits be described once and for all, just as measurement cannot be described once and for all. What goes on in a measurement process is differently described by classical physics and by quantum theory. To find the limits of what is observable in the world described by theory T we must inquire into T itself, and the theories used as auxiliaries in the testing and application of T. I want to spell this out in detail, because one might too easily get a feeling of vicious circularity.


And I want to give specific details on how science exhibits clear limits on observability. Recall the main difference between the realist and anti-realist pictures of scientific activity. When a scientist advances a new theory, the realist sees him as asserting the truth of the postulates. But the anti-realist sees him as displaying this theory, holding it up to view, as it were, and claiming certain virtues for it.

This theory draws a picture of the world.