Rule of inference


In logic and the philosophy of logic, specifically in deductive reasoning, a rule of inference, inference rule or transformation rule is a logical form consisting of a function which takes premises, analyzes their syntax, and returns a conclusion (or conclusions).

For example, the rule of inference called modus ponens takes two premises, one in the form "If p then q" and another in the form "p", and returns the conclusion "q". The rule is valid with respect to the semantics of classical logic (as well as the semantics of many other non-classical logics), in the sense that if the premises are true (under an interpretation), then so is the conclusion.

Typically, a rule of inference preserves truth, a semantic property. In many-valued logic, it preserves a designated truth value. But a rule of inference's action is purely syntactic, and does not need to preserve any semantic property: any function from sets of formulae to formulae counts as a rule of inference. Usually only rules that are recursive are important; i.e., rules such that there is an effective procedure for determining whether any given formula is the conclusion of a given set of formulae according to the rule. An example of a rule that is not effective in this sense is the infinitary ω-rule.[1]
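To make this syntactic reading concrete, the following short Python sketch (the tuple-based formula representation and the function name are hypothetical, chosen only for illustration) implements modus ponens as a function that inspects the syntax of a set of premises and returns a conclusion when the rule applies:

    # Hypothetical representation: a formula is either a propositional symbol
    # (a string) or an implication written as the tuple ("->", antecedent, consequent).

    def modus_ponens(premises):
        """Purely syntactic rule: if the premises contain an implication together
        with its antecedent, return its consequent; otherwise return None."""
        for formula in premises:
            if isinstance(formula, tuple) and formula[0] == "->":
                _, antecedent, consequent = formula
                if antecedent in premises:
                    return consequent
        return None

    # From "if p then q" and "p", the rule returns "q".
    print(modus_ponens({("->", "p", "q"), "p"}))  # prints: q

Nothing in the function refers to truth values; whether the rule is a good one is a separate, semantic question (for classical logic it is, because it preserves truth under every interpretation).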

Popular rules of inference in propositional logic include modus ponens, modus tollens, and contraposition. First-order predicate logic uses rules of inference to deal with logical quantifiers.

Standard form


In formal logic (and many related areas), rules of inference are usually given in the following standard form:

  Premise#1
  Premise#2
        ...
  Premise#n
  ----------
  Conclusion

This expression states that whenever in the course of some logical derivation the given premises have been obtained, the specified conclusion can be taken for granted as well. The exact formal language that is used to describe both premises and conclusions depends on the actual context of the derivations. In a simple case, one may use logical formulae, such as in:

  A → B
  A
  -----
  B

This is the modus ponens rule of propositional logic. Rules of inference are often formulated as schemata employing metavariables.[2] In the rule (schema) above, the metavariables A and B can be instantiated to any element of the universe (or sometimes, by convention, a restricted subset such as propositions) to form an infinite set of inference rules.
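The following sketch (hypothetical, using the same tuple representation as above) shows how a single schema with metavariables A and B yields concrete rule instances by substitution; every choice of formulae for A and B gives one member of that infinite set:

    # The modus ponens schema; "A" and "B" stand for metavariables.
    MP_SCHEMA = {"premises": [("->", "A", "B"), "A"], "conclusion": "B"}

    def instantiate(template, substitution):
        """Replace each metavariable in a schematic formula by a concrete formula."""
        if isinstance(template, tuple):
            return tuple(instantiate(part, substitution) for part in template)
        return substitution.get(template, template)

    def rule_instance(schema, substitution):
        """One concrete inference rule obtained from the schema."""
        return {"premises": [instantiate(p, substitution) for p in schema["premises"]],
                "conclusion": instantiate(schema["conclusion"], substitution)}

    # Instantiating A := p and B := (q -> r) gives one concrete instance.
    rule = rule_instance(MP_SCHEMA, {"A": "p", "B": ("->", "q", "r")})
    print(rule["premises"])    # [('->', 'p', ('->', 'q', 'r')), 'p']
    print(rule["conclusion"])  # ('->', 'q', 'r')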

A proof system is formed from a set of rules chained together to form proofs, also called derivations. Any derivation has only one final conclusion, which is the statement proved or derived. If premises are left unsatisfied in the derivation, then the derivation is a proof of a hypothetical statement: "if the premises hold, then the conclusion holds."
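As a rough illustration (again a hypothetical representation, continuing the Python sketches above), a derivation can be modelled as a tree of rule applications whose leaves are either closed by rules with no premises or left as open premises; when open premises remain, the derivation establishes only the corresponding hypothetical statement:

    # A derivation is either an open premise (an unsatisfied leaf) or the
    # application of a named rule to sub-derivations, with one final conclusion.

    class Premise:
        def __init__(self, formula):
            self.conclusion = formula

    class RuleApplication:
        def __init__(self, name, subderivations, conclusion):
            self.name = name
            self.subderivations = subderivations
            self.conclusion = conclusion

    def open_premises(derivation):
        """The premises left unsatisfied; if any remain, the derivation proves
        only: 'if these premises hold, then the conclusion holds'."""
        if isinstance(derivation, Premise):
            return [derivation.conclusion]
        return [p for sub in derivation.subderivations for p in open_premises(sub)]

    # Two chained uses of modus ponens: from p, p -> q and q -> r, derive r.
    d = RuleApplication("MP",
                        [RuleApplication("MP", [Premise("p"), Premise("p -> q")], "q"),
                         Premise("q -> r")],
                        "r")
    print(d.conclusion)      # r
    print(open_premises(d))  # ['p', 'p -> q', 'q -> r']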

Example: Hilbert systems for two propositional logics


In a Hilbert system, the premises and conclusion of the inference rules are simply formulae of some language, usually employing metavariables. For graphical compactness of the presentation and to emphasize the distinction between axioms and rules of inference, this section uses the sequent notation (⊢) instead of a vertical presentation of rules. In this notation,

  Premise 1
  Premise 2
  ----------
  Conclusion

is written as Premise 1, Premise 2 ⊢ Conclusion.

The formal language for classical propositional logic can be expressed using just negation (¬), implication (→) and propositional symbols. A well-known axiomatization, comprising three axiom schemata and one inference rule (modus ponens), is:

(CA1) ⊢ A → (B → A)
(CA2) ⊢ (A → (B → C)) → ((A → B) → (A → C))
(CA3) ⊢ (¬A → ¬B) → (B → A)
(MP) A, A → B ⊢ B

It may seem redundant to have two notions of inference in this case, ⊢ and →. In classical propositional logic, they indeed coincide; the deduction theorem states that A ⊢ B if and only if ⊢ A → B. There is however a distinction worth emphasizing even in this case: the first notation describes a deduction, that is, an activity of passing from sentences to sentences, whereas A → B is simply a formula made with a logical connective, implication in this case. Without an inference rule (like modus ponens in this case), there is no deduction or inference. This point is illustrated in Lewis Carroll's dialogue called "What the Tortoise Said to Achilles",[3] as well as in later attempts by Bertrand Russell and Peter Winch to resolve the paradox introduced in the dialogue.
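To see the rule at work, here is a standard derivation of ⊢ A → A using only the axiom schemata above and modus ponens (each line notes the schema instance or rule application that justifies it):

  1. ⊢ A → ((A → A) → A)                                   instance of CA1 (with B := A → A)
  2. ⊢ (A → ((A → A) → A)) → ((A → (A → A)) → (A → A))     instance of CA2 (with B := A → A, C := A)
  3. ⊢ (A → (A → A)) → (A → A)                             from 1 and 2 by MP
  4. ⊢ A → (A → A)                                         instance of CA1 (with B := A)
  5. ⊢ A → A                                               from 4 and 3 by MP

The derivation uses modus ponens twice; without that rule, the axioms alone would license no steps at all, which is exactly the point of the Tortoise's challenge.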

For some non-classical logics, the deduction theorem does not hold. For example, the three-valued logic of Łukasiewicz can be axiomatized as:[4]

(CA1) ⊢ A → (B → A)
(LA2) ⊢ (A → B) → ((B → C) → (A → C))
(CA3) ⊢ (¬A → ¬B) → (B → A)
(LA4) ⊢ ((A → ¬A) → A) → A
(MP) A, A → B ⊢ B

This axiomatization differs from the classical one by the change in axiom 2 and the addition of axiom 4. The classical deduction theorem does not hold for this logic; however, a modified form does hold, namely A ⊢ B if and only if ⊢ A → (A → B).[5]
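As a quick, hedged sanity check of this axiomatization against the intended semantics, the following Python sketch uses the standard Łukasiewicz three-valued truth functions on the values 0, 1/2 and 1, with 1 as the only designated value, and verifies that the new axiom LA4 is always designated and that modus ponens preserves designation (the function names are hypothetical):

    from fractions import Fraction
    from itertools import product

    VALUES = [Fraction(0), Fraction(1, 2), Fraction(1)]  # 1 is the designated value

    def neg(x):
        return 1 - x                         # Lukasiewicz negation

    def imp(x, y):
        return min(Fraction(1), 1 - x + y)   # Lukasiewicz implication

    # LA4: ((A -> not A) -> A) -> A takes the designated value under every valuation.
    assert all(imp(imp(imp(a, neg(a)), a), a) == 1 for a in VALUES)

    # Modus ponens preserves designation: if A and A -> B both have value 1, so does B.
    assert all(b == 1 for a, b in product(VALUES, repeat=2) if a == 1 and imp(a, b) == 1)

    print("LA4 is always designated; modus ponens preserves designation.")

A similar check shows that the classical schema CA2 is not always designated in this semantics (for instance at A = B = 1/2, C = 0 it takes the value 1/2), which is consistent with its replacement by LA2 here.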

Admissibility and derivability


In a set of rules, an inference rule could be redundant in the sense that it is admissible or derivable. A derivable rule is one whose conclusion can be derived from its premises using the other rules. An admissible rule is one whose conclusion holds whenever the premises hold. All derivable rules are admissible. To appreciate the difference, consider the following set of rules for defining the natural numbers (the judgment n nat asserts the fact that n is a natural number):

  -----
  0 nat

  n nat
  --------
  s(n) nat

The first rule states that 0 is a natural number, and the second states that s(n) is a natural number if n is. In this proof system, the following rule, demonstrating that the second successor of a natural number is also a natural number, is derivable:

  n nat
  -----------
  s(s(n)) nat

Its derivation is the composition of two uses of the successor rule above. The following rule for asserting the existence of a predecessor for any nonzero number is merely admissible:

  s(n) nat
  --------
  n nat

This is a true fact of natural numbers, as can be proven by induction. (To prove that this rule is admissible, assume a derivation of the premise and induct on it to produce a derivation of n nat.) However, it is not derivable, because it depends on the structure of the derivation of the premise. Because of this, derivability is stable under additions to the proof system, whereas admissibility is not. To see the difference, suppose the following nonsense rule were added to the proof system:

  ---------
  s(-3) nat

In this new system, the double-successor rule is still derivable. However, the rule for finding the predecessor is no longer admissible, because there is no way to derive -3 nat. The brittleness of admissibility comes from the way it is proved: since the proof can induct on the structure of the derivations of the premises, extensions to the system add new cases to this induction, and the old proof may no longer go through.
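The contrast can also be phrased in code. In the hypothetical Python sketch below, a derivation of n nat is a nested value built from the zero and successor rules; the double-successor rule is implemented by simply composing the successor rule (so it remains correct under any extension of the system), while the predecessor rule works by taking apart the given derivation, which is exactly what a new rule such as s(-3) nat would break:

    # Derivations of the judgment "n nat": ("zero",) derives 0 nat, and
    # ("succ", d) derives s(n) nat whenever d derives n nat.
    ZERO = ("zero",)

    def succ(d):
        """Successor rule: from a derivation of n nat, a derivation of s(n) nat."""
        return ("succ", d)

    def double_succ(d):
        """Derivable rule: two uses of the successor rule give s(s(n)) nat."""
        return succ(succ(d))

    def pred(d):
        """Admissible rule: from a derivation of s(n) nat, recover a derivation
        of n nat by inspecting the structure of the given derivation."""
        if d[0] == "succ":
            return d[1]
        raise ValueError("the premise derivation does not end in the successor rule")

    two = double_succ(ZERO)   # derives s(s(0)) nat
    print(pred(two))          # ('succ', ('zero',)) -- a derivation of s(0) nat

    # If a premise-free rule concluding "s(-3) nat" were added, double_succ would
    # still work unchanged, but pred could be handed a derivation it cannot take
    # apart into a derivation of "-3 nat".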

Admissible rules can be thought of as theorems of a proof system. For instance, in a sequent calculus where cut elimination holds, the cut rule is admissible.


References

  1. Boolos, George; Burgess, John; Jeffrey, Richard C. (2007). Computability and Logic. Cambridge: Cambridge University Press. p. 364. ISBN 978-0-521-87752-7.
  2. Reynolds, John C. (2009) [1998]. Theories of Programming Languages. Cambridge University Press. p. 12. ISBN 978-0-521-10697-9.
  3. Dosen, Kosta (1996). "Logical consequence: a turn in style". In Maria Luisa Dalla Chiara; Kees Doets; Daniele Mundici; Johan van Benthem (eds.). Logic and Scientific Methods: Volume One of the Tenth International Congress of Logic, Methodology and Philosophy of Science, Florence, August 1995. Springer. p. 290. ISBN 978-0-7923-4383-7.
  4. Bergmann, Merrie (2008). An Introduction to Many-Valued and Fuzzy Logic: Semantics, Algebras, and Derivation Systems. Cambridge University Press. p. 100. ISBN 978-0-521-88128-9.
  5. Bergmann, Merrie (2008). An Introduction to Many-Valued and Fuzzy Logic: Semantics, Algebras, and Derivation Systems. Cambridge University Press. p. 114. ISBN 978-0-521-88128-9.