Tuesday, March 3, 2009

Defining Measures Over Spaces Richer Than The Continuum

Cian Dorr asked an interesting question in the comments of my previous post:

I wonder how you would do physics in a spacetime finite volumes of which contain more than continuum many points? The physical theories of spacetime I'm familiar with are all based in ordinary differential geometry, which is about finite-dimensional manifolds, which by definition are locally isomorphic to R^n. I don't know how you'd begin to define, e.g., the notion of the gradient of a scalar field, if you were trying to work in something bigger.

I'm probably not comfortable enough with the maths to come up with elegant treatments of spaces with cardinalities higher than the continuum, but here is one way of doing it, though it is a bit kludgy. Let us start with a space that has 2^continuum many points. Instead of defining the fields, measures, etc. over points, define them over equivalence classes of points, where there are continuum-many equivalence classes, each containing 2^continuum many points: the classes collectively play the role of the points of an ordinary space. For example, the distance measure needed to get the right predictions in the physics we actually do need not treat as different all the points "around" a given point. (I'll talk in this entry about "space", though the remarks will carry over straightforwardly to spacetime.)
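
To make the recipe concrete, here is a minimal formalisation of my own; the choice of R^3 and of a product structure are illustrative assumptions, not part of the proposal. Take

\[ X = \mathbb{R}^3 \times S, \qquad |S| = 2^{2^{\aleph_0}}, \qquad \pi : X \to \mathbb{R}^3, \quad \pi(x,s) = x, \]

and let (x,s) be equivalent to (x',s') just when x = x', so each equivalence class has 2^(2^aleph_0) members while the whole space has 2^continuum points, as required. Distance and fields are then pulled back from the quotient:

\[ d\big((x,s),(x',s')\big) = \lVert x - x' \rVert, \qquad \phi = \hat{\phi} \circ \pi, \qquad \nabla \phi = (\nabla \hat{\phi}) \circ \pi, \]

where \( \hat{\phi} : \mathbb{R}^3 \to \mathbb{R} \) is an ordinary scalar field. On this sketch the differential geometry Cian asks about all happens on the quotient, which is just R^3; the extra points ride along inertly.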

You might wonder what right these so-called points have to be called "points" if, e.g., the distance metric does not distinguish them. Why aren't the equivalence classes better candidates to be identified as the "points"? But there are a few answers available: maybe there's also a more discerning and more natural function F from points to somethings that does distinguish the points inside our equivalence classes, and our distance measure is a crude abstraction from F that is good enough for practical purposes. Or maybe the natural relation in the area is not a function on equivalence classes, but a relation R between points that is uniform across these equivalence classes (formalised just below): that is, when we have two equivalence classes C and D, if any member of C stands in R to any member of D, then every member of C stands in R to every member of D. Or there might be other marks of distinction that make the members of the equivalence classes better candidates than the classes to count as points - maybe the members are the ultimate parts of spacetime, for example, or maybe we have general reasons for thinking classes can't be points.
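
To pin down that uniformity condition (my notation, with C and D ranging over the equivalence classes):

\[ \forall C\,\forall D\; \big[\, (\exists c \in C)(\exists d \in D)\; c\,R\,d \;\Rightarrow\; (\forall c \in C)(\forall d \in D)\; c\,R\,d \,\big], \]

equivalently, whether x stands in R to y depends only on the classes of x and y, so R is the pullback of a relation defined on the classes. Uniform relations are exactly the ones that quotient-level physics can already see; a failure of uniformity would itself be point-discerning structure, of the sort a more discerning function F is meant to supply in the first option.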

So the members of the equivalence classes might be the genuine points, and arbitrary fusions of them might have a good claim to be regions, thus giving us more than beth_2 regions (a space of 2^continuum = beth_2 points has 2^beth_2 = beth_3 fusions of points). But the physical theories need not operate very differently - it just turns out that where we treated our mathematical physics as defining fields, measures, etc. over points, it should instead be treated as defining those quantities over equivalence classes of points. Of course, we are left with the question of what the fundamental physical relationships are that we are modelling with our functions, but I hope I've said enough to indicate that there are a number of options here.

If the model I have just given works, then it will be trivial to carry out a similar procedure to generate models of larger spaces: simply ensure that the equivalence classes contain N points, where N can be any cardinal you like. It does not work as smoothly once each equivalence class has more than set-many points, though that raises quite different sorts of problems.
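
In the toy formalisation above this is just a matter of swapping the fibre (again, my gloss): take any set S of cardinality kappa, so that

\[ X = \mathbb{R}^3 \times S, \qquad |S| = \kappa, \qquad |X| = \max(2^{\aleph_0}, \kappa), \]

with fields still defined as pullbacks along the projection. The set-theoretic ceiling is also visible here: if the fibre is a proper class, then X is a proper class, and the usual set-level apparatus of measures and function spaces no longer applies as it stands.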

These models mimic standard physics, but for some purposes we might want to see what sorts of models of higher-cardinality spaces we could come up with that exploit the extra structure of those spaces to produce more complicated "physical" structures. Still, the sort of model I described should be enough to raise an epistemic question I alluded to in my previous post: if physics as we do it would work just as well in these richer spaces, why be so sure that we are not in one of these spaces? I think some appeal to simplicity or parsimony is good enough to favour believing we are not in such a higher-cardinality space. But I seem to be more of a fan of parsimony than a lot of people. This case might be another one to support the view that many physicists implicitly employ parsimony considerations in theory choice, perhaps even considerations of quantitative parsimony!

6 comments:

  1. When you say a spacetime whose finite regions have more than continuum many points, what kind of structures are you thinking of?

    If, say, we were basing our spacetime on a long line (beth_2 copies of [0,1)), then it's not clear to me what the equivalence relation would be. (The only beth_2-sized regions would be unbounded regions.)

    Also it seems kind of hard to assign a natural distance relation between points in this sort of spacetime, unless distances themselves are taken from the long line.

    Would R^beth_2 have this property too? If so, I again can't see what the equivalence relation would be. I guess you were thinking of the kind of large lines you get when you apply the Löwenheim-Skolem theorem to analysis, though?
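
    A quick check of the arithmetic, if I have the constructions right:

    \[ \beth_2 \cdot \beth_1 = \beth_2, \qquad \big|[0,1]^{\beth_2}\big| = \beth_1^{\beth_2} = 2^{\beth_2} = \beth_3, \]

    i.e. the long line has beth_2 points in total, but any bounded region sits inside an initial segment with fewer than beth_2 points (hence my parenthetical remark), while in R^beth_2 even a bounded box like [0,1]^beth_2 already has beth_3 points, so it would have the property - modulo the worry about a natural equivalence relation there.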

  2. Daniel,

    Interesting conclusion. But shouldn't we still distinguish interesting from uninteresting ways in which theory choice is guided by parsimony?

    Your suggestion reminds me of the Kukla algorithms for generating empirically underdetermined theories. It's a cool idea. But no one takes these constructions seriously, because each is just one of enumerably many contrived tricks.

    Similarly, as you point out, one can tack on extra points to each moment in spacetime, and then posit that the extra points don't play any interesting role in the model. And no physicist or philosopher should adopt such a spacetime. But that's not because of parsimony -- it's because such models are silly!

    However, there still do exist interesting cases of underdetermination, as well as interesting cases of theory choice involving parsimony. For example, parsimony seems to play a clear role in the choice of which theory best explains the accelerating cosmological expansion.

  3. Unsurprisingly, I don't like Daniel's model. Does this suggest that I am generally committed to "parsimony" (tying in with Robbie's comment on a previous post)? No. There are two *kinds* of invocation of parsimony:

    PARSIMONY INVOCATION, KIND 1
    ---
    Suppose we have two theories:

    S: postulates an ontology containing n As.
    T: postulates an ontology containing m > n Bs.

    At this point, some might invoke "parsimony" to favour S over T. I'm dubious: if all we want to do is avoid multiplying entities beyond necessity, then both S and T may have avoided multiplying As (respectively Bs) beyond necessity. I'm inclined to think that there's an incommensurability between their commitments.

    PARSIMONY INVOCATION, KIND 2
    ---
    Suppose instead we have:

    U: postulates an ontology containing n As.
    V: postulates an ontology containing m > n As.

    Clearly their commitments are commensurable. Now U seems to win over V on grounds of parsimony. (Even then, there might be reasons to favour V over U - if, say, U is much harder to use than V.)

    My rejection of Daniel's model is a case of kind 2. Daniel's model postulates extra points (so the ontological commitments are directly comparable), and as Bryan says, these extra points don't play any interesting role in the theory, so don't make it easier to use.

    There are kinks to work out (we need to say exactly when it is that theories postulate entities of the same kind, and so when their ontologies are comparable). But however the fine print on *that* goes, I think we can safely reject Daniel's model without finding ourselves generally attached to "desert landscapes".

  4. On a different train of thought: To get large models where the points actually seem to *do* something in the model, can't we use the techniques of non-standard analysis?
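
    One standard route, sketched with illustrative cardinals (the regular-ultrafilter fact is textbook model theory): for a regular ultrafilter U on an infinite index set I, the ultrapower of the reals satisfies

    \[ \big|\mathbb{R}^{I} / \mathcal{U}\big| = |\mathbb{R}|^{|I|} = 2^{|I|}, \]

    so taking |I| = beth_1 gives a hyperreal line with beth_2 = 2^continuum points, elementarily equivalent to the reals by Łoś's theorem. There the extra points are infinitesimals and infinite numbers, which do real work in the model rather than riding along inertly.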

  5. Tim (on the first train of thought): I agree there are different forms of parsimony, and believing in some sorts won't get you to a general desert landscape ontology.

    It looks to me like your second kind of parsimony is the one I suggested was the correct form of a quantitative parsimony principle in my paper "Quantitative Parsimony". But maybe I'm reading too much into it. Even that much parsimony would be too much for a lot of philosophers, or so it seems to me. If this kind of case got people to sign up to as much as your second kind of parsimony, that would be welcome progress.

  6. Thanks for the suggestion. I've just now read the paper. Interesting stuff. There's some overlap between our positions, but I'm not sure how much.

    In the terms at the start of your paper: I'm *not* advocating qualitative parsimony. My advocacy of quantitative parsimony is also pretty cautious. Plenty of theories are incommensurable, as far as "parsimony" is concerned. For example, theory A says that there is just one kind, but beth_4 instances of it; theory B says that there are aleph_0 kinds, each of which has finitely many instances; I'm doubtful that "parsimony" has anything valuable to say here. I expect you'll agree.

    Of course you're right that it's better to postulate 1 neutrino (rather than 17 million) in each case of beta decay. Perhaps, though, that's explicable on Lakatosian grounds. Postulate a single entity, then *test* that postulation: see if you can get a "fractional" decay, or some such. (Similarly for the chemical examples: postulate that water is H2O, but then see if you can decompose your water molecule so that 4 hydrogen and 2 oxygen atoms result.) Once many interesting tests fail, you've some good reason to suppose that there's just one thing. But is the reason in question just one of "parsimony"? Not sure.

    Probably similar comments apply to your model of spacetime: see if you can somehow get at the individual points, rather than always having to deal with the equivalence classes.
