# -*- mode: org; -*-
#+TITLE: *Email discussions for December 1993*
#+OPTIONS: ^:{} author:nil
#+TAGS: From

* proposals :From: jap

I thought I had sent this out, but none of Harry, Juergen or Dave
recalls seeing it, so I guess I did not.  Rather than responding to a
number of topics together, it would help to focus discussion if you
would cut out the relevant topic to which to add your comments and
send several messages.  Sorry for the delay!

--Julian.

------------------------------------------------------------------------

It was agreed that we would not make decisions on each issue at the
meeting, but agree upon a proposal to address each item (from the list
e-mailed beforehand and from others raised at the meeting) and review
the set afterwards (for self-consistency, if nothing else!).  This
message contains the set of proposals for modifications to bring 0.99
to 1.0.  It may be useful to read this in conjunction with Russell and
David's meeting report of the discussion that surrounded a particular
issue.

1. STATIC ERRORS MIGHT BE SIGNALLED

To rename static error as violation.  To note that a preparation
program must issue a diagnostic if it detects a violation.  To note
that a preparation program must issue a diagnostic if it detects a
dynamic error.  If the result of preparation is a runnable program,
then that program must signal any dynamic error.

JAP: further revision is to rename dynamic error as static error now
that the need to distinguish the two flavours has gone.

2. REMOVE: DEFMACRO  MACROS RENAMED SYNTAX FUNCTIONS  ADD: EXPORT SYNTAX

To expand sections 9.5 and 9.7 to note that macro definitions extend
the syntax environment and may be visible externally via the
export-syntax directive.  To define the purpose and behaviour of
export-syntax in section 9.  To modify 13.2.2.3 consistent with the
foregoing.

3. DEFLITERAL

To receive a description from HED for consideration.

4. LEVEL-0 TELOS CONDITIONS etc.

Moot given proposal 1.

5. ADD

To modify 13.2

6. CASE IN FORMAT DIRECTIVES

Title is unrelated to proposal.  To replace format with a collection
of printf functions (fprintf, sprintf, eprintf, printf) and adopt ISO
C syntax for format directives extended by %a.  To remove scanf.  HED
will e-mail a more detailed write-up.

7. ADD: READ

To add definition of read to A.14.

8. CLASS-SPECIFIC INPUT FUNCTIONS

No changes.

9. INVARIANT GF APPLICATION BEHAVIOUR

No changes.

10. METHOD DEFAULT SPECIALIZERS

No changes.

11. REST ARGUMENTS in GENERIC-FUNCTIONS

To add argument "rest" to B15.1-4.  To add initarg 'rest to B.11.1.
To add definition of generic-function-rest to B.7.  To add definition
of method-rest to B.8.

12. REMOVE: METHOD-LAMBDA-FUNCTION, CALL-METHOD-FUNCTION, APPLY-METHOD-FUNCTION

No changes.

13. ADDITIONAL SCAN DIRECTIVES

No changes.

14. DOMAIN AND RANGE OF DEEP-COPY AND SHALLOW-COPY

No changes, but see 15.

15. SPECIFICATION OF RESULT TYPES

To add more specific information regarding the class of the result
where appropriate.

16. REMOVE:  ADD: CLASS OPTION

To remove references to abstract classes (note figure B.1).  To add
'abstract class option in B.1.1.  To add definition of
abstract-class-p in B.5.

17. ADD: 'required-initargs CLASS OPTION

To add requiredp as a slot option (Table 5) taking a boolean value.
To replace initarg by keyword.  To replace initform by default.  To
replace initfunction (eg. B.6.4) by default-function.

18. ADD: OPENP

To add definition of openp to A.14.

19. USE 'D' OR 'E' EXPONENT MARKER?

To replace d|D by e|E.

20. MODULE INITIALIZATION ORDER.

To add the wording in the proposal to 9.7.

21. ARGUMENT ORDER TO (SETTER ELEMENT)

No changes.

22. ADD:  AND  AS ABSTRACT CLASSES

See 29.

23. MAKE thread-start AND thread-value GENERIC  ADD: GENERIC-SIGNAL  ADD: wait METHOD FOR

To modify 11.1.5-6 to make these functions generic and to add default
methods for them.  To extend 12.2 with the definition of
signal-using-thread, a generic function, whose first argument is the
thread on which to signal the condition.
To modify 12.2.2 to reflect the use of signal-using-thread.  A
proposal on the wait method will be made later by RJB, DDER, NRB and
JAP.

24. NON-HYGIENIC SEMANTICS FOR MACROS

To expand the description of syntax expansion in 9.7, in particular to
enumerate some of the typical problems stemming from non-hygienic
expansion.

25. STATUS OF and AND or

Partially clarified by 2.  Functional versions also to be defined with
the same names as the macros (13.4).

26. CHARACTER NAMING CONVENTIONS

To replace the extended names for special characters (eg. #\newline)
by their string digram equivalents (see table A.7), eg. #\\n.

27. KEYWORDS

To expand the minimal character set (table 2) to be ISO Latin 1.  To
add the (concrete) class , the abstract class  and make  a subclass
of .

28. WITH-HANDLER

To change the example (fig. 5) to use an externally defined generic
function rather than a dynamically constructed gf and to use
catch/throw instead of let/cc.

29. CLASS HIERARCHY REVISION

To replace figure 3 by the (level 0) hierarchy given in RJB's message
and to add a new figure in Annex B showing the full level 0 and level
1 hierarchy.  To note that only abstract classes are subclassable and
that (abstract)  is not subclassable.  To rename  as .  To rename  as
.  To rename  as .  To rename  as  (and all other such references).
To remove  and .  To replace defstruct at level-0 by defclass.
Additions to the hierarchy as per RJB/DDER's diagram.

30. POINTS RAISED BY ULRICH KRIEGEL

arithmetic coercions: to add the generic function lift, to define
methods on it to describe coercion consistent with "floating point"
contagion, except in the case of comparison operators and to describe
its interaction with the n-ary arithmetic operators.  (Note: lift is
not to be called by the binary generic operators).  To note that
coercion of a  to a  may overflow and that case "is an error".  To add
() and all the necessary methods.

JAP: not clear to me how lift can work given the parenthetical remark
above...unless it takes the operator to be applied as an argument.
Should (+ a b c) ==> (lift a (lift b c binary+) binary+)??

condition class accessors: to define  with two slots, message and
message-args, where message is a format string matching the
message-args.  To remove all defined slots in subclasses of .  To
remove defcondition.

31. ADD: N-ARY COMPARATORS

To expand A.3 with >, >=, <= as n-ary functions and != as a binary
function.

32. COLLECTION AND SEQUENCE FUNCTIONS

To add definitions of first, second, last and sort as functions.  To
add definitions of delete (destructive) and remove (constructive) as
functions.  To add the notion of explicit keys and to clarify the
meaning of operations on infinite collections.  To change the
specification of  to replace the initarg fill-value by fill-function,
which is a function of two arguments taking the key and the
collection.  To add the class-specific operators: vector-ref,
string-ref, list-ref, hash-table-ref, corresponding setters,
vector-length, string-length, list-length and hash-table-size.

33. PUBLICATION

To add Nitsan, Neil and Odile to the list of contributors.  To
transfer Greg from the editors to the contributors.

34. STREAMS

A detailed proposal based on POSIX and buffered I/O will be sent by
HED.

35. FILENAMES

To add the (concrete) class  (where??) with the external syntax
#F"...".  To add definitions of the functions: basename, extension,
dirname, device and merge-filenames.  To add a converter method from
to .  An additional proposal to add file and directory operations
based on POSIX will be sent by HED.

* proposals :From: Dave De Roure

> 25. STATUS OF and AND or
>
> Partially clarified by 2.  Functional versions also to be defined with
> the same names as the macros (13.4).
Interesting, quite a compelling example, but in the interest of not
having functions with different semantics and the same names, I
propose that if the functional versions are to be included they should
be given different names; e.g. logical-and (or even logical-and-p!)
etc, though maybe that sounds too much like bitwise operations.

-- Dave

* write-ups from Harley :From: Juergen Kopp

3. DEFLITERAL

To receive a description from HED for consideration.

6. CASE IN FORMAT DIRECTIVES

Title is unrelated to proposal.  To replace format with a collection
of printf functions (fprintf, sprintf, eprintf, printf) and adopt ISO
C syntax for format directives extended by %a.  To remove scanf.  HED
will e-mail a more detailed write-up.

34. STREAMS

A detailed proposal based on POSIX and buffered I/O will be sent by
HED.

35. FILENAMES

To add the (concrete) class  (where??) with the external syntax
#F"...".  To add definitions of the functions: basename, extension,
dirname, device and merge-filenames.  To add a converter method from
to .  An additional proposal to add file and directory operations
based on POSIX will be sent by HED.

Immediately breaking my own request about grouping topics...this is to
say that all the write-ups from Harley mentioned in the above topics,
plus some additional material on number lifting from Nitsan is
available as compressed ps files from ftp.bath.ac.uk:pub/eulisp.

midge $ ls -l *.ps.gz
-rw-r--r--  1 masjap      19011 Dec  3 13:01 adv-genarith.ps.gz
-rw-r--r--  1 masjap      88037 Dec  3 12:57 eulisp-proposals.ps.gz
-rw-r--r--  1 masjap      22837 Dec  3 12:58 genarith.ps.gz

--Julian (at GMD).

* proposals :From: Jeff Dalton

> > 25. STATUS OF and AND or
> >
> > Partially clarified by 2.  Functional versions also to be defined with
> > the same names as the macros (13.4).
>
> Interesting, quite a compelling example, but in the interest of not having
> functions with different semantics and the same names, I propose that if
> the functional versions are to be included they should be given different
> names; e.g. logical-and (or even logical-and-p!) etc, though maybe that
> sounds too much like bitwise operations.

For what it's worth, T has *AND, *OR, and *IF.

-- jeff

* signal :From: E. Ulrich Kriegel

Hi Julian,

on page 24 it is stated that signal should never return.  But what
about the case where one thread signals a condition to another thread?

page 19: A signal on a determined thread has no discernible effect on
either the signalled or signalling thread ...

Is it correct if we understand it in the following sense: if no thread
is given, signal should never return?

greetings

--ulrich

* write-ups from Harley :From: Jeff Dalton

> 6. CASE IN FORMAT DIRECTIVES
>
> Title is unrelated to proposal.  To replace format with a collection
> of printf functions (fprintf, sprintf, eprintf, printf) and adopt ISO
> C syntax for format directives extended by %a.  To remove scanf.  HED
> will e-mail a more detailed write-up.

This strikes me as a totally bizarre move, as if people would be
converting C programs directly to EuLisp if only an incompatible
format syntax didn't get in the way.

So, in this spirit, let me repeat my suggestion on the Dylan list that
the syntax of (car z) should be z->p.car ("p" for "pair", you see).
Furthermore, we should distinguish between FILE *s and fds, should
replace the AND and OR macros by && and || respectively, and should
remove dangerous orthogonality by distinguishing between statements
and expressions.  Strings should be removed from the language, and
programmers should be required to specify an explicit size whenever
they want to, say, concat the referents of two char *s.  Don't forget
to call free when you're done.  Several other randomly selected C
features should be added as well, to complete the transformation; then
we can take, say, one additional feature from C++, such as abandoning
object identity when there's multiple-inheritance.

Once we've made these important changes, we should immediately publish
the 1.0 definition, since the main obstacles to the wide acceptance of
EuLisp will have been removed.

-- jd

* write-ups from Harley :From: Harley Davis

Date: Fri, 3 Dec 93 17:33:19 GMT
:From: Jeff Dalton

> 6. CASE IN FORMAT DIRECTIVES
>
> Title is unrelated to proposal.  To replace format with a collection
> of printf functions (fprintf, sprintf, eprintf, printf) and adopt ISO
> C syntax for format directives extended by %a.
> To remove scanf.  HED
> will e-mail a more detailed write-up.

This strikes me as a totally bizarre move, as if people would be
converting C programs directly to EuLisp if only an incompatible
format syntax didn't get in the way.

So, in this spirit, let me repeat my suggestion on the Dylan list that
the syntax of (car z) should be z->p.car ("p" for "pair", you see).
Furthermore, we should distinguish between FILE *s and fds, should
replace the AND and OR macros by && and || respectively, and should
remove dangerous orthogonality by distinguishing between statements
and expressions.  Strings should be removed from the language, and
programmers should be required to specify an explicit size whenever
they want to, say, concat the referents of two char *s.  Don't forget
to call free when you're done.  Several other randomly selected C
features should be added as well, to complete the transformation; then
we can take, say, one additional feature from C++, such as abandoning
object identity when there's multiple-inheritance.

Once we've made these important changes, we should immediately publish
the 1.0 definition, since the main obstacles to the wide acceptance of
EuLisp will have been removed.

If you look at the FORMAT function in EuLisp, you will see that it's
not much more powerful than printf.  However, it does introduce a new
set of directives which will have to be learned by programmers coming
from outside Lisp, and they will not see the point.  In addition,
FORMAT has to be implemented independently of an already existing and
standard set of functions, while a printf in EuLisp can be implemented
in part using sprintf, especially for annoying floating point
formatting.  This will reduce application size and promote integration
and output consistency between mixed Lisp/C applications.

For formatted output, we can either try to be slightly consistent with
CommonLisp or with C.  I really don't see why we would choose
CommonLisp in this case, since the subset we already chose isn't
obviously better than what C provides.  Also, the printf we're
proposing is somewhat better than in C since it does error checking
and won't give you a segmentation violation if you screw up the types
or the number of arguments.

Perhaps you could answer the question "Why not use printf instead of
inventing a new sublanguage which is only familiar to a small minority
of programmers?"

Look, nobody's going to propose abandoning Lisp in favor of C
semantics.  However, the non-Lisp world does have a certain number of
standards for which we can only provide incremental improvements.  Why
not use them?

-- Harley

* write-ups from Harley :From: Jeff Dalton

> If you look at the FORMAT function in EuLisp, you will see that it's
> not much more powerful than printf.  However, it does introduce a new
> set of directives which will have to be learned by programmers coming
> from outside Lisp, and they will not see the point.

Give me a break!  Every time someone moves from one language to
another they have to learn some different ways of doing the same
thing.  Programmers can deal with this.

But no!  When it comes to format, suddenly it's all too much.  "I
don't see the point!", they cry.  "What a ridiculous imposition!"

"That's right!", the answer comes.  "Those bastards!  I will never use
their language, the scum!"

"Yeah!  They can't get away with this.  We'll show them!"

> In addition,
> FORMAT has to be implemented independently of an already existing and
> standard set of functions,

Haven't you noticed, Harley?  Printf doesn't have a clue about
outputting Lisp data.  The only way to make it compatible is to make
it too wimpy to use.

> while a printf in EuLisp can be implemented
> in part using sprintf, especially for annoying floating point
> formatting.

Franz Lisp used to do that, and it didn't call the Lisp function
printf.

> This will reduce application size

No it won't.  (See Franz.)

> and promote integration
> and output consistency between mixed Lisp/C applications.

So would 1000 other things which we're not going to do.  Picking this
one thing seems completely off the wall to me.  Especially now.  We
had years in which to do this, if it was so important.

> For formatted output, we can either try to be slightly consistent with
> CommonLisp or with C.  I really don't see why we would choose
> CommonLisp in this case, since the subset we already chose isn't
> obviously better than what C provides.

An attempt to exploit anti-Common Lisp feeling will get nowhere with
me, as you must know.  Besides, format predated Common Lisp.

In any case, you're not even talking about redoing I/O to be
compatible with C generally, you're talking about a minute part of the
language that won't be fully compatible in any case.

It looks like C++ has got Lisp folk running so scared that they're
starting to consider irrational ways of attracting C programmers.  If
you want to make a case that certain small changes will make a big
difference, you should show that this really is the case and identify
the changes that will do the trick.

> Perhaps you could answer the question "Why not use printf instead of
> inventing a new sublanguage which is only familiar to a small minority
> of programmers?"

This minute sublanguage, in a familiar form, will present little
problem to programmers.  If you have some evidence that calling it
printf will make a huge difference, let's have it.

> Look, nobody's going to propose abandoning Lisp in favor of C
> semantics.  However, the non-Lisp world does have a certain number of
> standards for which we can only provide incremental improvements.  Why
> not use them?

Printf is not a standard for any language but C.  If you can make a
case that this particular change will make a big difference, I'll be
happy to make it.
But don't tell me there's a general non-Lisp-World standard that we
ought to respect, and that the burden of proof is therefore on me,
because it's just not so.

If you want to compete with C and C++, design a better language than C
or C++.  If you have to resort to removing micro-irritants, you've
already lost.

-- jd

* proposals (AND and OR) :From: jpff

Message written at Fri Dec  3 21:13:50 GMT 1993

I do agree with Dave that having functions and macros with the same
name does sound like being deliberately perverse.  The names
"logical-and" are the ones I am used to as bitwise -- I realise this
is not very logical.....

==John

* write-ups from Harley :From: Harley Davis

Date: Sun, 5 Dec 93 23:58:13 GMT
:From: Jeff Dalton

> If you look at the FORMAT function in EuLisp, you will see that it's
> not much more powerful than printf.  However, it does introduce a new
> set of directives which will have to be learned by programmers coming
> from outside Lisp, and they will not see the point.

Give me a break!  Every time someone moves from one language to
another they have to learn some different ways of doing the same
thing.  Programmers can deal with this.

But no!  When it comes to format, suddenly it's all too much.  "I
don't see the point!", they cry.  "What a ridiculous imposition!"

"That's right!", the answer comes.  "Those bastards!  I will never use
their language, the scum!"

"Yeah!  They can't get away with this.  We'll show them!"

It's getting somewhat difficult to discuss this in a rational way when
you blow up at any response.  This discussion would be easier if you
would just present your arguments without exaggerating the other side
and inventing straw men.

In any case, I think you have misunderstood the point of this
proposal.  It is not meant to attract C/C++ programmers away from
C/C++.
It is meant to recognize the fact that Lisp's role in the future will
necessarily be as a complement to C/C++ -- indeed, that is already the
case today -- and so we should, wherever possible, simplify the life
of the programmer who will use both languages together.

If you look at the entire set of proposals which I sent to Julian and
which are available by ftp from Bath, you will see not just printf but
also a new proposal for filenames, file operations, and streams.
These proposals are based on POSIX file operations and the stream
system is meant to be compatible with POSIX buffered stream
operations, with the addition of a higher level of functionality for
reading and printing Lisp objects.  (If you had read the printf
proposal, you would have seen that the proposed printf also has a new
directive for handling Lisp objects and treats %s reasonably for Lisp
objects.)  So printf is not an isolated case.

> In addition,
> FORMAT has to be implemented independently of an already existing and
> standard set of functions,

Haven't you noticed, Harley?  Printf doesn't have a clue about
outputting Lisp data.  The only way to make it compatible is to make
it too wimpy to use.

Ours does.

> while a printf in EuLisp can be implemented
> in part using sprintf, especially for annoying floating point
> formatting.

Franz Lisp used to do that, and it didn't call the Lisp function
printf.

But *why not* call it printf?  What's your argument, Jeff?

> This will reduce application size

No it won't.  (See Franz.)

Well, it does in our system.  If Franz implemented things in a losing
way, it's not the fault of this specification.

> and promote integration
> and output consistency between mixed Lisp/C applications.

So would 1000 other things which we're not going to do.  Picking this
one thing seems completely off the wall to me.  Especially now.  We
had years in which to do this, if it was so important.

It's not just one thing.  If there are other things aside from those
proposed which would help mixed language applications, I would be
interested in hearing about them.  I hope other EuLispers would too.
(I don't think adopting C syntax or getting rid of garbage collection
helps anybody.)

> For formatted output, we can either try to be slightly consistent with
> CommonLisp or with C.  I really don't see why we would choose
> CommonLisp in this case, since the subset we already chose isn't
> obviously better than what C provides.

An attempt to exploit anti-Common Lisp feeling will get nowhere with
me, as you must know.  Besides, format predated Common Lisp.

I'm not trying to exploit anti-CommonLisp feeling.  I'm simply making
the completely empirical point that more programmers know printf than
format; printf and EuLisp's format are basically equivalent in power;
we should go with what more people know.  That's the argument.

In any case, you're not even talking about redoing I/O to be
compatible with C generally, you're talking about a minute part of the
language that won't be fully compatible in any case.

Please read the entire proposal before making such assumptions.

It looks like C++ has got Lisp folk running so scared that they're
starting to consider irrational ways of attracting C programmers.  If
you want to make a case that certain small changes will make a big
difference, you should show that this really is the case and identify
the changes that will do the trick.

You are wrong.

> Perhaps you could answer the question "Why not use printf instead of
> inventing a new sublanguage which is only familiar to a small minority
> of programmers?"

This minute sublanguage, in a familiar form, will present little
problem to programmers.  If you have some evidence that calling it
printf will make a huge difference, let's have it.

A huge difference in what?  Again you seem to be assuming that all
this is just some sort of trick to attract C programmers.  But that's
just not true.

> Look, nobody's going to propose abandoning Lisp in favor of C
> semantics.  However, the non-Lisp world does have a certain number of
> standards for which we can only provide incremental improvements.  Why
> not use them?

Printf is not a standard for any language but C.  If you can make a
case that this particular change will make a big difference, I'll be
happy to make it.  But don't tell me there's a general non-Lisp-World
standard that we ought to respect, and that the burden of proof is
therefore on me, because it's just not so.

Most programmers who will be likely EuLisp users know C already.  And
there is a general non-Lisp world standard whose name is POSIX.  This
standard does specify a certain number of operations including printf.

If you want to compete with C and C++, design a better language than C
or C++.  If you have to resort to removing micro-irritants, you've
already lost.

Not compete, co-operate.  Competing is a sure way to lose;
co-operating intelligently will provide Lisp its appropriate
ecological niche.

-- Harley

* write-ups from Harley :From: Jeff Dalton

> It's getting somewhat difficult to discuss this in a rational way when
> you blow up at any response.

You think that's blowing up?  You've misunderstood me.  Do I really
have to put in funny little character sequences everywhere?

> This discussion would be easier if you
> would just present your arguments without exaggerating the other side
> and inventing straw men.

If I've misunderstood ("invented") your arguments, perhaps it's
because they weren't sufficiently clear.

> In any case, I think you have misunderstood the point of this
> proposal.  It is not meant to attract C/C++ programmers away from
> C/C++.  It is meant to recognize the fact that Lisp's role in the
> future will necessarily be as a complement to C/C++ -- indeed, that is
> already the case today -- and so we should, wherever possible,
> simplify the life of the programmer who will use both languages
> together.
And I still think this is a completely trivial move in that direction. In any case, you *are* trying to attract C and C++ programmers, whether "away from C and C++" or not. Moreover, if they use Lisp at all, they will be moving away from C and C++ to that extent. Right now, their Lisp usage tends to be zero. > If you look at the entire set of proposals which I sent to Julian and > which are available by ftp from Bath, you will see not just printf but > also a new proposal for filenames, file operations, and streams. > These proposals are based on POSIX file operations and the stream > system is meant to be compatible with POSIX buffered stream > operations, with the addition of a higher level of functionality for > reading and printing Lisp objects. (If you had read the printf > proposal, you would have seen that the proposed printf also has a new > directive for handling Lisp objects and treats %s reasonably for Lisp > objects.) So printf is not an isolated case. So you agree that the printf change is not justified on its own, contrary to how it appeared in your previous message. I don't mind being compatible with "buffered streams", though one of the advantages of Lisp used to be that it was not so tied to details at that level as C. But compatible ought to cover a very wide range, so why it requires printf is not clear. In any case, replacing the I/O system at this point requires more in the way of justification than I have seen. For instance, to what extent will C++ I/O be able to use EuLisp streams directly? BTW, I knew it would have a "new" (to C, not to EuLisp) directive for printing Lisp objects. > Haven't you noticed, Harley? Printf doesn't have a clue about > outputting Lisp data. The only way to make it compatible is to > make it too wimpy to use. > > Ours does. Then it's not the same as the one in C. Only the wimpy one is the same. If it's "compatible" (which covers a multitude of sins) you should say how. 
You should at least check whether the C standard allows extensions of the required sort. > > while a printf in EuLisp can be implemented > > in part using sprintf, especially for annoying floating point > > formatting. > > Franz Lisp used to do that, and it didn't call the Lisp function > printf. > > But *why not* call it printf? What's your argument, Jeff? If there's no good reason to call it printf, then why do it? If you propose a change, you ought to accept the burden of proof. I think calling it printf looks silly, won't impress C programmers, is a gratuitous change from existing Lisp practice, is inconsistent with naming and syntax conventions in the rest of EuLisp, and is being proposed so late in the day that we won't have time to deal with any unfortunate consequences before we puiblish the definition. > > This will reduce application size > > No it won't. (See Franz.) > > Well, it does in our system. If Franz implemented things in a losing > way, it's not the fault of this specification. I don't think you're making much effort to understand me. Franz shows (IMHO) that you can have the same benefits (being discussed at this point) without making the change you suggest. Therefore the additional benefits of the change are zero. > > and promote integration > > and output consistency between mixed Lisp/C applications. > > So would 1000 other things which we're not going to do. Picking > this one thing seems completely off the wall to me. Especially now. > We had years in which to do this, if it was so important. > > It's not just one thing. If there are other things aside from those > proposed which would help mixed language applications, I would be > interested in hearing about them. I hope other EuLispers would too. I'd be interested in hearing about them, and if they form a coherent package in which printf makes sense, that will be a strong point in favor of printf. > (I don't think adopting C syntax or getting rid of garbage collection > helps anybody.) 
How about FILE *s? :-> > > For formatted output, we can either try to be slightly consistent with > > CommonLisp or with C. I really don't see why we would choose > > CommonLisp in this case, since the subset we already chose isn't > > obviously better than what C provides. > > An attempt to exploit anti-Common lisp feeling will get nowhere with > me, as you must know. Besides, format predated Common Lisp. > > I'm not trying to exploit anti-CommonLisp feeling. Then why present it as a choice between compatibility with CL and compatibility with C? > I'm simply making > the completely empirical point that more programmers know printf than > format; printf and EuLisp's format are basically equivalent in power; > we should go with what more people know. That's the argument. It's a very general argument being applied very selectively. So the case-specific arguments must be the decisive ones. What are they? > In any case, you're not even talking about redoing I/O to be compatible > with C generally, you're talking about a minute part of the language > that won't be fully compatible in any case. > > Please read the entire proposal before making such assumptions. Tell me why it's compatible with C in a suffucuently useful sense. Common Lisp I/O is compatible with C to the extent that it can be implemented in C and hence can be regarded as an extension. > It looks like C++ has got Lisp folk running so scared that they're > starting to consider irrational ways of attracting C progrmmers. > If you want to make a case that certain small changes will make > a big difference, you should show that this really is the case and > indentify the changes that will do the trick. > > You are wrong. About what? That you ought to show that the changes will make a big difference? As for the rest, see above. You are trying to attract C and C++ programmers. 
> > Perhaps you could answer the question "Why not use printf instead of
> > inventing a new sublanguage which is only familiar to a small minority
> > of programmers?"
>
> This minute sublanguage, in a familiar form, will present little
> problem to programmers.  If you have some evidence that calling
> it printf will make a huge difference, let's have it.
>
> A huge difference in what?  Again you seem to be assuming that all
> this is just some sort of trick to attract C programmers.  But that's
> just not true.

Will it make a huge difference in *anything*?  If it's not to attract
C++ programmers, why not use the Algol 68 name?

> > Look, nobody's going to propose abandoning Lisp in favor of C
> > semantics.  However, the non-Lisp world does have a certain number of
> > standards for which we can only provide incremental improvements.  Why
> > not use them?
>
> Printf is not a standard for any language but C.  If you can make a
> case that this particular change will make a big difference, I'll
> be happy to make it.  But don't tell me there's a general non-Lisp-world
> standard that we ought to respect, and that the burden of proof is
> therefore on me, because it's just not so.
>
> Most programmers who will be likely EuLisp users know C already.

How do you know?  Maybe C programmers will want nothing to do with it.

> And there is a general non-Lisp world standard whose name is POSIX.  This
> standard does specify a certain number of operations including printf.

I have used a wide range of programming languages, and none of them
except C use printf.  That it's in a lower-level standard is beside
the point.  Languages should be independent of operating systems and
the like.  Now, if you want to propose that we have all POSIX calls in
a library, maybe that makes sense.

> If you want to compete with C and C++, design a better language than
> C or C++.  If you have to resort to removing micro-irritants, you've
> already lost.
>
> Not compete, co-operate.
> Competing is a sure way to lose; co-operating intelligently will
> provide Lisp its appropriate ecological niche.

I'm sorry, but if you want C and C++ programmers to use EuLisp for
anything at all you're going to have to compete with the languages
they would use otherwise, namely C and C++.  Removing micro-irritants
is not an effective way to do this.

-- jeff

* write-ups from Harley :From: Richard Tobin

> there is a general non-Lisp world standard whose name is POSIX.  This
> standard does specify a certain number of operations including printf.

This seems irrelevant to me.  POSIX isn't a standard for Lisp.  And
the standard it provides for printf is a standard for printing C data
types.

And this seems to provide an argument *against* calling the Lisp
function printf: if an implementation wants to provide access to the
real printf, for use with actual C data, it will have to call it
something else, or else have functions in two modules with the same
name.

Indeed, I would suggest a rule of adopting names that are *different*
from those of any POSIX functions, at least where it's not too
inconvenient.

-- Richard

* write-ups from Harley :From: Richard Tobin

> You should at least check whether the C standard
> allows extensions of the required sort.

C reserves new %-lower-case-letter specifiers for future use.  Other
characters "may be used in extensions".  So if we adopt printf, we
should not use %a (maybe %A?).

-- Richard

* proposals (AND and OR) :From: Jeff Dalton

> Message written at Fri Dec 3 21:13:50 GMT 1993
>
> I do agree with Dave that having functions and macros with the same name
> does sound like being deliberately perverse.  The names "logical-and"
> are the ones I am used to as bitwise -- I realise this is not very
> logical.....
>
> ==John

The more I think about this, the more I think the idea of AND and OR
functions is a mistake.  The times when they're the right thing are
fairly rare.  I don't think I've ever encountered one.
Moreover, there are a number of other, more general, operations that
can easily handle the cases handled directly by AND and OR functions.
(I'm thinking of MEMBER, SOME, EVERY, various loop constructs, etc.)

However, if names are wanted, how about ALL-TRUE and ALL-FALSE, with
(COMPLEMENT ALL-FALSE) serving as OR?  (If we don't have COMPLEMENT,
I think we should.  Since we have functional values, let's take
advantage of them.)

-- jeff

* write-ups from Harley :From: Jeff Dalton

> the standard it provides for printf is a standard for printing C data
> types.

BTW, do we have null-terminated strings in EuLisp?  I think it would
be a good idea.  Also a way to test for the null char.

-- jeff

* printf :From: Harley Davis

In article Jeff Dalton writes:

> This discussion would be easier if you would just present your
> arguments without exaggerating the other side and inventing straw men.

If I've misunderstood ("invented") your arguments, perhaps it's
because they weren't sufficiently clear.

I made the arguments at the EuLisp meeting, where they were generally
accepted.  You responded (rather vehemently) to a posting by Julian in
which he merely listed the proposals from the meeting without any
arguments at all.  So you can't really complain that the arguments
were insufficiently clear; it's not my fault you weren't at the
meeting, and you never asked for the arguments - you just responded to
what you thought were the arguments.

Now, if you want me to restate the argument as clearly as possible, I
will do so once again.  Here it is, as I believe I stated it during
the meeting:

POSIX provides a certain number of services which, as services, are
more or less sufficient for a large number of tasks, and they are
fairly well-known among the programmers who are likely to use EuLisp.
Therefore, when we want to provide a service in EuLisp which has an
analogue in POSIX, we should provide a binding to the equivalent POSIX
functions.
In addition, because Lisp programmers expect better error handling and
a simpler interface, we should add in error handling and take
advantage of existing EuLisp types when providing such a binding.

As a concrete example of this reasoning, I propose replacing the
existing EuLisp format (which in any case is basically printf with
renamed directives) with the POSIX printf, plus various improvements.
Additionally, stream and file operations can be based on their POSIX
equivalents.  In the case of files, this binding is fairly
straightforward; in the case of streams, it is more complicated
because FILE*'s and fd's are insufficient for a number of reasons (not
least is that they don't support READ/PRINT very well).  However, it
is at least possible to have stream operations which are explicitly
buffered in a way compatible with FILE*'s, and in fact we can provide
a reasonable level of genericity in streams by defining a generic
protocol over this buffering.  A demonstration specification and
implementation is provided by Ilog Talk, which has taken this
approach.

There is the argument in its complete form.  Now you can tell me I am
insufficiently clear, if you think that is the case.

> In any case, I think you have misunderstood the point of this
> proposal.  It is not meant to attract C/C++ programmers away from
> C/C++.  It is meant to recognize the fact that Lisp's role in the
> future will necessarily be as a complement to C/C++ -- indeed, that is
> already the case today -- and so we should, wherever possible,
> simplify the life of the programmer who will use both languages
> together.
>
> And I still think this is a completely trivial move in that direction.
> In any case, you *are* trying to attract C and C++ programmers,
> whether "away from C and C++" or not.  Moreover, if they use Lisp at
> all, they will be moving away from C and C++ to that extent.  Right
> now, their Lisp usage tends to be zero.

No, I still must disagree.
This move is not at all designed to attract C and C++ programmers
(especially the latter).  It is designed to make the language more
homogeneous with the de facto standards in the environments in which
it will likely be used, and therefore make life easier for those
programmers who have chosen to use it.  This argument makes no
reference at all to the reasons why a C/C++ programmer might choose to
use EuLisp.

In addition to this argument, I think the general POSIX move does in
fact make EuLisp more attractive to those programmers, and thus as a
side-effect can help attract these programmers, but it also helps
those who are primarily Lisp programmers who in any case also have to
use C/C++ for any serious work.

If you want to know what I think will attract C/C++ programmers, I
would rather cite EuLisp's real advantages: GC, macros, better object
system, interactive environment, etc., which lead to greater
productivity, plus of course its advantages compared to other
high-level dynamic languages such as CL, Dylan, Python, Tcl or
whatever.  If you as a EuLisp marketer thought that supplying a POSIX
binding would convince some individual, you could also bring that out,
but it's not the primary intention.

Like you, Jeff, I would hope that everything we add to the language is
purely to make it a better language, and not some marketing trick
(like Dylan's alternative syntax).  I really do believe that printf is
better than some mutant format which is in any case based on printf.
I also believe that basing functionality on a standard when we can't
do much better is also good for the language since it makes it simpler
to implement and specify and easier to learn.

This would all be different if we had some great, really winning ideas
for streams, filenames, and formatting.  But this isn't the case.

(I would also point out in passing that when we introduced scanf you
didn't raise hell.  Why not?  What if we had proposed printf at that
point?
I can only suspect that something non-technical is bothering you about
this idea.)

> If you look at the entire set of proposals which I sent to Julian and
> which are available by ftp from Bath, you will see not just printf but
> also a new proposal for filenames, file operations, and streams.
> These proposals are based on POSIX file operations and the stream
> system is meant to be compatible with POSIX buffered stream
> operations, with the addition of a higher level of functionality for
> reading and printing Lisp objects.  (If you had read the printf
> proposal, you would have seen that the proposed printf also has a new
> directive for handling Lisp objects and treats %s reasonably for Lisp
> objects.)  So printf is not an isolated case.
>
> So you agree that the printf change is not justified on its own,
> contrary to how it appeared in your previous message.

Since you were responding to the proposal, I had assumed that you had
read it in its entirety and that you had somehow learned of the
general argument behind it and the further ramifications.  Apparently
this wasn't the case, so we have to back up.

> I don't mind being compatible with "buffered streams", though one of
> the advantages of Lisp used to be that it was not so tied to details
> at that level as C.  But compatible ought to cover a very wide range,
> so why it requires printf is not clear.  In any case, replacing the
> I/O system at this point requires more in the way of justification
> than I have seen.  For instance, to what extent will C++ I/O be able
> to use EuLisp streams directly?

C++ streams are not compatible with EuLisp streams.  But so what?  I
think C++ streams are losing, and I wouldn't want to propose something
like them for EuLisp.

> > Haven't you noticed, Harley?  Printf doesn't have a clue about
> > outputting Lisp data.  The only way to make it compatible is to
> > make it too wimpy to use.
> >
> > Ours does.
>
> Then it's not the same as the one in C.  Only the wimpy one is the
> same.
> If it's "compatible" (which covers a multitude of sins) you should
> say how.  You should at least check whether the C standard allows
> extensions of the required sort.

Read the proposal and see how.  I don't see whether it matters if the
POSIX standard allows extensions of the required sort or not.  We
aren't bound by the standard, but I believe it behooves us to follow
it to the extent that it is reasonable.

> > while a printf in EuLisp can be implemented
> > in part using sprintf, especially for annoying floating point
> > formatting.
>
> Franz Lisp used to do that, and it didn't call the Lisp function
> printf.
>
> But *why not* call it printf?  What's your argument, Jeff?
>
> If there's no good reason to call it printf, then why do it?  If you
> propose a change, you ought to accept the burden of proof.

> I think calling it printf looks silly,

I disagree, I think the name by itself makes almost no difference at
all.

> won't impress C programmers,

Disagree; all the C/C++ programmers here think it's good.

> is a gratuitous change from existing Lisp practice,

It's not gratuitous.

> is inconsistent with naming and syntax conventions in the rest of
> EuLisp,

(except scanf and the other proposed POSIX bindings)

> and is being proposed so late in the day that we won't have time to
> deal with any unfortunate consequences before we publish the
> definition.

How many unfortunate consequences could it have?  Check out the
current EuLisp format and tell me how replacing that with the proposed
printf could possibly have unfortunate consequences.  They're
basically the same.  (OK, there's no equivalent to ~& or ~r in printf.
I can live with it, or they can be added.  On the other hand, there's
no way to specify field widths or right justification for certain
directives in the current EuLisp format.)  If you trust our experience
with Talk, I can assure you that there are no hidden problems.

> > and promote integration
> > and output consistency between mixed Lisp/C applications.
> > So would 1000 other things which we're not going to do.  Picking
> > this one thing seems completely off the wall to me.  Especially now.
> > We had years in which to do this, if it was so important.
>
> It's not just one thing.  If there are other things aside from those
> proposed which would help mixed language applications, I would be
> interested in hearing about them.  I hope other EuLispers would too.
>
> I'd be interested in hearing about them, and if they form a coherent
> package in which printf makes sense, that will be a strong point in
> favor of printf.

You're the one who said there are 1000 other things we could do.  So
let's hear about some of them.  I already proposed a certain number
which Julian has kindly made ftp-able.

> (I don't think adopting C syntax or getting rid of garbage collection
> helps anybody.)
>
> How about FILE *s?  :->

Streams are better.  FILE*'s are losing because they don't do
everything that fd's can do.  Fd's are losing because they aren't
objects and they aren't buffered.  It's also nice to be able to
subclass streams and do Lisp object I/O on them.  C++ streams are
losing because they use too much state for controlling anything beyond
the most trivial uses.

> > For formatted output, we can either try to be slightly consistent with
> > CommonLisp or with C.  I really don't see why we would choose
> > CommonLisp in this case, since the subset we already chose isn't
> > obviously better than what C provides.
>
> An attempt to exploit anti-Common Lisp feeling will get nowhere with
> me, as you must know.  Besides, format predated Common Lisp.
>
> I'm not trying to exploit anti-CommonLisp feeling.
>
> Then why present it as a choice between compatibility with CL and
> compatibility with C?

Of course, we could be consistent with some language even more
marginal than CL, or (as we did) invent our own incompatible format.
I just assumed that these were bad choices and not worth considering,
but perhaps I was wrong.
If someone were to propose a format that was really, really better
than printf, then I would certainly listen.

> I'm simply making the completely empirical point that more
> programmers know printf than format; printf and EuLisp's format are
> basically equivalent in power; we should go with what more people
> know.  That's the argument.
>
> It's a very general argument being applied very selectively.  So the
> case-specific arguments must be the decisive ones.  What are they?

See above.

> In any case, you're not even talking about redoing I/O to be compatible
> with C generally, you're talking about a minute part of the language
> that won't be fully compatible in any case.
>
> Please read the entire proposal before making such assumptions.
>
> Tell me why it's compatible with C in a sufficiently useful sense.
> Common Lisp I/O is compatible with C to the extent that it can be
> implemented in C and hence can be regarded as an extension.

Here's how:

    void some_fn(int x)
    {
      printf("fib(%d) = %d\n", x, fib(x));
    }

vs. one of:

    (defun some-fn (x)
      (printf "fib(%d) = %d\n" x (fib x)))

or

    (defun some-fn (x)
      (format t "fib(~d) = ~d~%" x (fib x)))

The second Lisp form has no objective advantages over the first.  The
first is likely to be understood and appreciated by 95% of
Unix/Windows programmers whatever their background.  The second is
understood immediately only by experienced Lispers, who in any case
also understand the first form because they've all programmed in C by
now.

> I have used a wide range of programming languages, and none of them
> except C use printf.  That it's in a lower-level standard is beside
> the point.  Languages should be independent of operating systems
> and the like.  Now, if you want to propose that we have all POSIX
> calls in a library, maybe that makes sense.

The language *is* independent of OS's and the like.
It just so happens that POSIX provides such a notion which is both
familiar to most programmers and quite portable across the vast
majority of platforms on which EuLisp will be used.

> If you want to compete with C and C++, design a better language than
> C or C++.  If you have to resort to removing micro-irritants, you've
> already lost.
>
> Not compete, co-operate.  Competing is a sure way to lose;
> co-operating intelligently will provide Lisp its appropriate
> ecological niche.
>
> I'm sorry, but if you want C and C++ programmers to use EuLisp for
> anything at all you're going to have to compete with the languages
> they would use otherwise, namely C and C++.  Removing micro-irritants
> is not an effective way to do this.

I completely disagree.  It is possible to present Lisp as a complement
to C/C++; in fact, this seems to be by far the most successful way to
present it.  It is unnecessary to compete; both languages have a
place.  Removing any micro-irritant for which we don't provide an
obvious better alternative also helps, just a little in each case, but
still some.

So I repeat: Why not?  I have presented a detailed argument along with
a concrete, implemented proposal.  It has pleased both our programmers
and the clients we've shown it to.  What argument is there for holding
on to the status quo?

-- Harley

* printf :From: Jeff Dalton

> If I've misunderstood ("invented") your arguments, perhaps it's
> because they weren't sufficiently clear.
>
> I made the arguments at the EuLisp meeting, where they were generally
> accepted.  You responded (rather vehemently) to a posting by Julian in
> which he merely listed the proposals from the meeting without any
> arguments at all.  So you can't really complain that the arguments
> were insufficiently clear; it's not my fault you weren't at the
> meeting, and you never asked for the arguments - you just responded to
> what you thought were the arguments.

My original message didn't respond to arguments; it responded to a
proposal.
My second message responded to things you said in reply, which seems
fair enough to me.  Meetings sometimes make incorrect decisions and
sometimes follow a mistaken line of reasoning.

The first few times I heard or read that format might be changed to
printf, I thought it was kind of silly but didn't think much of it.
After all, we've long had \n in there.  When it appeared again in a
message from Juergen Kopp, it suddenly struck me that it was a bizarre
change to make, hence my message.  Taking a little bit of C and
putting it into EuLisp still seems bizarre to me, even if it's also a
little bit of POSIX.

Now, here is the proposal I've seen:

    To replace format with a collection of printf functions (fprintf,
    sprintf, eprintf, printf) and adopt ISO C syntax for format
    directives extended by %a.  To remove scanf.  HED will e-mail a
    more detailed write-up.

In addition, there's a long write-up from you, which seems to contain
sections of the Ilog Talk documentation.  I don't know how much of
this is actually being proposed, but some of it looks like pretty
standard Lisp stuff, while some other parts seem to have efficiency
advantages.  The printf change is in a third category that I find more
questionable.  I can see the point of being able to call the same
routines in several different languages, but the case for similar
routines with the same name is (shall we say) less clear.

> Now, if you want me to restate the argument as clearly as possible, I
> will do so once again.  Here it is, as I believe I stated it during
> the meeting:
>
> POSIX provides a certain number of services which, as services, are
> more or less sufficient for a large number of tasks, and they are
> fairly well-known among the programmers who are likely to use EuLisp.

There's an implicit decision here about who these programmers are.  It
looks to me like it's pretty close in extension to "C programmers".  C
is the only language I know that uses printf, for example.
Moreover, it doesn't seem to include programmers like me who are far
more familiar with Lisp conventions.  Changing the conventions for the
directive string from ~ to % etc will actually discourage me from
using EuLisp.

> Therefore, when we want to provide a service in EuLisp which has an
> analogue in POSIX, we should provide a binding to the equivalent POSIX
> functions.

It looks like you're not providing a binding but rather a different
(albeit similar) function.

> In addition, because Lisp programmers expect better error
> handling and a simpler interface, we should add in error handling and
> take advantage of existing EuLisp types when providing such a binding.
> As a concrete example of this reasoning, I propose replacing the
> existing EuLisp format (which in any case is basically printf with
> renamed directives) with the POSIX printf, plus various improvements.
> Additionally, stream and file operations can be based on their POSIX
> equivalents.  In the case of files, this binding is fairly
> straightforward; in the case of streams, it is more complicated
> because FILE*'s and fd's are insufficient for a number of reasons (not
> least is that they don't support READ/PRINT very well).  However, it
> is at least possible to have stream operations which are explicitly
> buffered in a way compatible with FILE*'s, and in fact we can provide
> a reasonable level of genericity in streams by defining a generic
> protocol over this buffering.  A demonstration specification and
> implementation is provided by Ilog Talk, which has taken this
> approach.

If I were using C and EuLisp together, I'd want to know several
things.  Can I pass EuLisp strings and streams directly to C
functions?  Do EuLisp std{in,out} and C std{in,out} stay in step?  Do
they share buffers?

Now it looks to me like you're diverging from the POSIX routines in
some significant ways, though it's not clear exactly what they are.
But once you diverge, having the same names is a somewhat mixed
blessing.
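The "do they share buffers" question is not idle: when two I/O layers
each keep their own buffer, output written through one layer is
invisible to the other until a flush.  A minimal, language-neutral
illustration of the effect, using Python's stdlib buffered-stream
classes (nothing EuLisp-specific is assumed):

    import io

    raw = io.BytesIO()                 # stands in for the shared fd / C level
    buffered = io.BufferedWriter(raw)  # a second, separately buffered layer

    buffered.write(b"fib(10) = 55\n")
    print(raw.getvalue())   # b'' -- the bytes are still in the upper buffer
    buffered.flush()
    print(raw.getvalue())   # now the lower layer finally sees the output

If EuLisp streams and C's stdio each buffered independently over the
same file descriptor, interleaved output from the two languages could
be reordered in exactly this way unless every write is flushed.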
> There is the argument in its complete form.  Now you can tell me I am
> insufficiently clear, if you think that is the case.

It's reasonably clear, but it doesn't look very different from what I
thought the argument was.

> In any case, you *are* trying to attract C and C++ programmers,
> whether "away from C and C++" or not.  Moreover, if they use Lisp
> at all, they will be moving away from C and C++ to that extent.
> Right now, their Lisp usage tends to be zero.
>
> No, I still must disagree.  This move is not at all designed to
> attract C and C++ programmers (especially the latter).  It is designed
> to make the language more homogeneous with the de facto standards in
> the environments in which it will likely be used,

But you won't actually conform to the standards.

> and therefore make life easier for those programmers who have
> chosen to use it.

It doesn't make life easier for me.  There's a class of programmers,
which doesn't include programmers like me, that's going to get this
easier life.  If this isn't designed to attract them, then I'm baffled
as to what aim it has; and I don't see the point in debating whether
it's exactly C and C++ programmers or some similar group.
Nonetheless, printf will be more familiar to C programmers than to
anyone else.

> This argument makes no reference at all to the reasons why a C/C++
> programmer might choose to use EuLisp.

Not explicitly; but see my previous paragraph.

> In addition to this argument, I think the general POSIX move does in
> fact make EuLisp more attractive to those programmers, and thus as a
> side-effect can help attract these programmers, but it also helps
> those who are primarily Lisp programmers who in any case also have to
> use C/C++ for any serious work.

Well, I often have to use C together with Lisp, and changing the
format conventions makes me less inclined to use EuLisp.  But perhaps
I don't use C enough.
> If you want to know what I think will attract C/C++ programmers, I
> would rather cite EuLisp's real advantages: GC, macros, better object
> system, [...]

Ok.

> [...] If you as a EuLisp marketer thought that supplying a
> POSIX binding would convince some individual, you could also bring
> that out, but it's not the primary intention.

It seems to me that the only available plausible aims are to attract
some programmers who would not otherwise use EuLisp or to make life
easier for some people who would use EuLisp anyway.  Are you saying
it's the latter?

> Like you, Jeff, I would hope that everything we add to the language is
> purely to make it a better language, and not some marketing trick

I never thought it was a marketing trick.  I thought it was being
proposed in good faith.  I'm sorry if my presentation made that
unclear.

> I really do believe that printf is
> better than some mutant format which is in any case based on printf.

Was it based on printf?

> I also believe that basing functionality on a standard when we can't
> do much better is also good for the language since it makes it simpler
> to implement and specify and easier to learn.

But how much of existing printf can we use?  We'll want to print many
things that aren't handled by printf, including numbers of sorts not
known in C.

> This would all be different if we had some great, really winning ideas
> for streams, filenames, and formatting.  But this isn't the case.
> (I would also point out in passing that when we introduced scanf you
> didn't raise hell.  Why not?  What if we had proposed printf at that
> point?  I can only suspect that something non-technical is bothering
> you about this idea.)

I always thought that having scanf was kind of silly.  I don't know
when it was introduced.  Maybe I wasn't there.  I didn't even think
much of the printf change at first.  Anyway, I have both technical and
nontechnical reservations, as I've tried to indicate.
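The worry about "numbers of sorts not known in C" need not rule out
the printf directives themselves: a language can keep the %d notation
while extending its domain to its own numeric types.  Python, for
example, inherits C's printf conversion syntax yet applies %d to
arbitrary-precision integers that no C integer type can hold:

    big = 10 ** 30            # a bignum, well beyond any C integer type
    s = "big result: %d" % big
    print(s)                  # the directive works; only the domain grew
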
> Since you were responding to the proposal, I had assumed that you had
> read it in its entirety and that you had somehow learned of the
> general argument behind it and the further ramifications.  Apparently
> this wasn't the case, so we have to back up.

Had people read it in its entirety before discussing it at the
meeting?  That wasn't my impression, though I admit I have it second
hand.  So I assumed that the notes about the meeting correctly
reported what was being proposed and that this didn't necessarily
coincide with your long paper.

> > Haven't you noticed, Harley?  Printf doesn't have a clue about
> > outputting Lisp data.  The only way to make it compatible is to
> > make it too wimpy to use.
> >
> > Ours does.
> >
> > Then it's not the same as the one in C.  Only the wimpy one is
> > the same.  If it's "compatible" (which covers a multitude of sins)
> > you should say how.  You should at least check whether the C standard
> > allows extensions of the required sort.
>
> Read the proposal and see how.

Well, according to Richard's message one can extend only via
upper-case letters.  He tells me he also raised this in the meeting.

> I don't see whether it matters if the
> POSIX standard allows extensions of the required sort or not.  We
> aren't bound by the standard, but I believe it behooves us to follow
> it to the extent that it is reasonable.

If you're talking compatibility, such details matter.  The more you
say "compatibility, but differing in lots of details", the less
convincing I find it.

> > I think calling it printf looks silly,
>
> I disagree, I think the name by itself makes almost no difference at all.

It makes it look like you're trying to be like C.

> > won't impress C programmers,
>
> disagree; all the C/C++ programmers here think it's good.

So C and C++ programmers matter after all, do they?

> > is a gratuitous change from existing Lisp practice,
>
> it's not gratuitous

If the name doesn't matter, then changing the name is gratuitous.
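Richard's point about upper-case extension letters is easy to
accommodate in an implementation: pre-scan the control string, handle
an %A directive directly by printing the object's representation, and
delegate every standard directive to ordinary C-style formatting.  A
toy sketch in Python -- the name eu_printf and the %A directive are
illustrative assumptions here, not anything the proposal specifies
verbatim, and Python's repr stands in for the Lisp printer:

    def eu_printf(fmt, *args):
        # Toy sketch: handles only single-character directives (no widths).
        # %A -- hypothetical extension: emit the object's representation.
        # All other directives are handed to C-style %-formatting.
        out, ai, i = [], 0, 0
        while i < len(fmt):
            if fmt[i] == "%" and i + 1 < len(fmt):
                d = fmt[i + 1]
                if d == "%":
                    out.append("%")
                elif d == "A":                     # the extension directive
                    out.append(repr(args[ai])); ai += 1
                else:                              # a standard C directive
                    out.append(("%" + d) % args[ai]); ai += 1
                i += 2
            else:
                out.append(fmt[i]); i += 1
        return "".join(out)

    print(eu_printf("x = %d, obj = %A", 3, [1, 2]))

Because %A is an upper-case letter, such an extension stays out of the
namespace that the C standard reserves for its own future directives.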
> > is inconsistent
> > with naming and syntax conventions in the rest of EuLisp,
>
> (except scanf and the other proposed POSIX bindings)

Scanf is being eliminated.

> > and is being proposed so late in the day that we won't have time to
> > deal with any unfortunate consequences before we publish the
> > definition.
>
> How many unfortunate consequences could it have?

Maybe it gets in the way of calling the real printf.  Maybe it rules
out too many extensions.  Maybe POSIX-binding makes us too OS
dependent.  BTW, I seem to recall that C requires n-arg functions to
have at least one required arg.  This is the kind of thing that makes
me glad Lisp is further away from the machine and OS.  C has been
looking less and less attractive to me of late.  Perhaps this has
influenced what I've said.

> If you trust our experience
> with Talk, I can assure you that there are no hidden problems.

That seems reasonable.

> > > and promote integration
> > > and output consistency between mixed Lisp/C applications.
> >
> > So would 1000 other things which we're not going to do.  Picking
> > this one thing seems completely off the wall to me.  Especially now.
> > We had years in which to do this, if it was so important.
> >
> > It's not just one thing.  If there are other things aside from those
> > proposed which would help mixed language applications, I would be
> > interested in hearing about them.  I hope other EuLispers would too.
> >
> > I'd be interested in hearing about them, and if they form a coherent
> > package in which printf makes sense, that will be a strong point in
> > favor of printf.
>
> You're the one who said there are 1000 other things we could do.  So
> let's hear about some of them.  I already proposed a certain number
> which Julian has kindly made ftp-able.

Well, I think many of the 1000 are silly too, since they're just
aiming at syntactic similarity, or undesirable for other reasons.
However, if the aim is to make it easier to work with C, then I think
we ought to take some time to think about it generally.  I'd like to
be able to mix Lisp and C procedures on a fairly equal basis, but even
a fairly minimal but standard (in EuLisp) foreign interface might be
worthwhile.

> If someone were to propose a format that was
> really, really better than printf, then I would certainly listen.

Something that can handle Lisp data is already better than, and
different from, printf.

> > Tell me why it's compatible with C in a sufficiently useful sense.
> > Common Lisp I/O is compatible with C to the extent that it can be
> > implemented in C and hence can be regarded as an extension.
>
> Here's how:

[parallel Lisp / C code omitted]

> The second Lisp form has no objective advantages over the first.  The
> first is likely to be understood and appreciated by 95% of
> Unix/Windows programmers whatever their background.  The second is
> understood immediately only by experienced Lispers, who in any case
> also understand the first form because they've all programmed in C by
> now.

You've shown a case (%d vs ~d) that works well.  %s works less well,
as does %X.  (Is it supposed to be obvious what this means?  How about
%e vs %E?)  How do I output a Lisp object that's not handled by C?
How do I control whether the output is readable by READ?  There are
many important cases that aren't resolved by analogy with C's printf.
In any case this is a very small part of what programmers have to do
when moving between Lisp and C.

> > I have used a wide range of programming languages, and none of them
> > except C use printf.  That it's in a lower-level standard is beside
> > the point.  Languages should be independent of operating systems
> > and the like.  Now, if you want to propose that we have all POSIX
> > calls in a library, maybe that makes sense.
>
> The language *is* independent of OS's and the like.
However, we're > talking about the I/O library, which is necessarily tied to some > notion of its operating environment. It just so happens that POSIX > provides such a notion which is both familiar to most programmers and > quite portable across the vast majority of platforms on which EuLisp > will be used. I'm often annoyed by OS dependencies in C I/O and am repeatedly thankful that Lisp is at a sufficiently higher level that similar problems don't occur (very often). > I'm sorry, but if you want C and C++ programmers to use EuLisp > for anything at all you're going to have to compete with the > languages they would use otherwise, namely C and C++. Removing > micro-irritants is not an effective way to do this. > > I completely disagree. It is possible to present Lisp as a complement > to C/C++; in fact, this seems to be by far the most successful way to > present it. It is unnecessary to compete; both languages have a > place. By "compete with" I don't mean "totally replace". However, people do all kinds of things in C (and C++). Things I would do in Lisp. Companies offer products in or for C/C++ that look more natural to me as Lisp products. Indeed, C and C++ programmers get by pretty well without Lisp. To a large extent, getting them to use Lisp for something involves getting them to not use C or C++. The only alternative is to get them to do in Lisp things they wouldn't do at all otherwise. I find people willing to do so much in C and C++ that I think the scope for this is small. > Removing any micro-irritant for which we don't provide an > obvious better alternative also helps, just a little in each case, but > still some. But sometimes it makes things worse for other people or looks silly or inconsistent with other things. > So I repeat: Why not? I have presented a detailed > argument along with a concrete, implemented proposal. It has pleased > both our programmers and the clients we've shown it to. What argument > is there for holding on to the status quo? 
See above. -- jeff * write-ups from Harley :From: Harley Davis Date: Mon, 6 Dec 93 14:31:57 GMT :From: Jeff Dalton > the standard it provides for printf is a standard for printing C data > types. BTW, do we have null-terminated strings in EuLisp? I think it would be a good idea. Also a way to test for the null char. -- jeff I agree. Of course, we should be careful to also allow the length to be explicitly coded in the string. -- Harley * printf :From: Harley Davis > Therefore, when we want to provide a service in EuLisp which has an > analogue in POSIX, we should provide a binding to the equivalent POSIX > functions. It looks like you're not providing a binding but rather a different (albeit similar) function. In the mysterious world of inter-language standards, this proposal certainly counts as a binding. If you doubt it, check out the CORBA spec and what they count as a language binding. (For example, the differences between the C/C++/SmallTalk bindings.) Other references are also available. Basically, "binding" is a pretty loose word. > In addition, because Lisp programmers expect better error > handling and a simpler interface, we should add in error handling and > take advantage of existing EuLisp types when providing such a binding. > As a concrete example of this reasoning, I propose replacing the > existing EuLisp format (which in any case is basically printf with > renamed directives) with the POSIX printf, plus various improvements. > Additionally, stream and file operations can be based on their POSIX > equivalents. In the case of files, this binding is fairly > straightforward; in the case of streams, it is more complicated > because FILE*'s and fd's are insufficient for a number of reasons (not > least is that they don't support READ/PRINT very well). 
However, it > is at least possible to have stream operations which are explicitly > buffered in a way compatible with FILE*'s, and in fact we can provide > a reasonable level of genericity in streams by defining a generic > protocol over this buffering. A demonstration specification and > implementation is provided by Ilog Talk, which has taken this > approach. If I were using C and Eulisp together, I'd want to know several things. Can I pass EuLisp strings and streams directly to C functions? Do Eulisp std{in,out} and C std{in,out} stay in step? Do they share buffers? Since there is no foreign language interface in EuLisp, it's pretty hard to specify this, no? In Talk, you can pass strings to C (they're null terminated direct char * pointers), and streams are translated to fd's (it was FILE* but that was too limiting). The stdxxx in Talk are initialized from C, but then go their own way. There is no buffer sharing (although we did want to, it turned out that the FILE* buffers aren't sufficiently portably controllable.) Now it looks to me like you're diverging from the POSIX routines in some significant ways, though it's not clear exactly what they are. But once you diverge, having the same names is a somewhat mixed blessing. If you want to go with the sales argument, you can say that EuLisp has a POSIX binding with improvements, so C programmers using EuLisp have both a familiar set of functions and a more comfortable environment to use them. > and therefore make life easier for those programmers which have > chosen to use it. It doesn't make life easier for me. There's a class of programmers, which doesn't include programmers like me, that's going to get this easier life. If this isn't designed to attract them, then I'm baffled as to what aim it has; and I don't see the point in debating whether it's exactly C and C++ programmers or some similar group. Nonetheless, printf will be more familiar to C programmers than to anyone else. 
I think I explicitly mentioned that this would help a majority of programmers. Personally, given the choice between helping the minority of Lisp programmers vs. the vast majority of C/C++ programmers, I prefer the latter. This necessarily means that the minority is slightly disgruntled. I would have to say, too bad, Jeff, but I don't really think you would suffer very much, if at all. Well, I often have to use C together with Lisp, and changing the format conventions makes me less inclined to use EuLisp. But perhaps I don't use C enough. Have you looked at the EuLisp format conventions? They're not compatible with anything else. Why does a ~ please you more than a %? > I really do believe that printf is > better than some mutant format which is in any case based on printf. Was it based on printf? Please read section A.10.3 of EuLisp 0.99, page 54, remarks for the function format: "These formatting directives are intentionally compatible with the facilities defined for the function fprintf in ISO/IEC 9899:1990." So EuLisp's format is already printf hiding behind a thin veneer of pseudo-Lisp compatibility. > I also believe that basing functionality on a standard when we can't > do much better is also good for the language since it makes it simpler > to implement and specify and easier to learn. But how much of existing printf can we use? We'll want to print many things that aren't handled by printf, including numbers of sorts not known in C. The new directive %A prints all Lisp objects as if printed by prin. This will obviously handle numbers too. > won't impress C programmers, > > disagree; all the C/C++ programmers here think it's good. So C and C++ programmers matter after all, do they? Of course, that's the whole point. I differed with the idea that this proposal is primarily meant to attract them rather than retain them. We want to please C/C++ programmers because they represent 90% of Unix/DOS/Windows programmers. 
> is a gratuitous change from existing Lisp practice, > > it's not gratuitous If the name doesn't matter, then changing the name is gratuitous. Exactly. Why change from the standard printf to bizarre, obscure format? > How many unfortunate consequences could it have? Maybe it gets in the way of calling the real printf. Maybe it rules out too many extensions. Maybe POSIX-binding makes us too OS dependent. Unices are almost all POSIX compliant. Windows NT has a POSIX compliant module (and better ones available commercially.) VMS is now POSIX compliant. Even DOS has a goodly number of the POSIX functions. What OS are you worried about? Perhaps Mac? What is the situation there? What about the AS/400? Should we care about that? BTW, I seem to recall that C requires n-arg functions to have at least one required arg. This is the kind of thing that makes me glad Lisp is further away from the machine and OS. C has been looking less and less attractive to me of late. Perhaps this has influenced what I've said. C has never looked particularly attractive to me. In fact, I think it sucks for almost every program I want to write. Nevertheless, that is what people use and for certain areas we don't have interfaces that are much better. Well, I think many of the 1000 are silly too, since they're just aiming at syntactic similarity, or undesirable for other reasons. However, if the aim is to make it easier to work with C, then I think we ought to take some time to think about it generally. I'd like to be able to mix Lisp and C procedures on a fairly equal basis, but even a fairly minimal but standard (in EuLisp) foreign interface might be worthwhile. I would be happy to propose a minimal foreign function interface (for C anyway) if people are interested. > If someone were to propose a format that was > really, really better than printf, then I would certainly listen. Something that can handle Lisp data is already better than, and different from, printf. 
But upwardly compatible with it. > Tell me why it's compatible with C in a sufficiently useful sense. > Common Lisp I/O is compatible with C to the extent that it can be > implemented in C and hence can be regarded as an extension. > > Here's how: [parallel Lisp / C code omitted] > The second Lisp form has no objective advantages over the first. The > first is likely to be understood and appreciated by 95% of > Unix/Windows programmers whatever their background. The second is > understood immediately only by experienced Lispers, who in any case > also understand the first form because they've all programmed in C by > now. You've shown a case (%d vs ~d) that works well. %s works less well, Converts its argument to a string. as does %X (Is it supposed to be obvious what this means? What's the problem? How about %e vs %E?) Why is %e vs. %E mysterious? How do I output a Lisp object that's not handled by C? %A. How do I control whether the output is readable by READ? %A. There are many important cases that aren't resolved by analogy with C's printf. But they're in the proposal. In any case this is a very small part of what programmers have to do when moving between Lisp and C. Naturally. Any implementation needs a foreign function interface. As I said, if people are willing to consider such a thing for the language (and I think the public would applaud a Lisp with a standard, even minimal, FFI), I would be happy to start the ball rolling with a proposal. > I have used a wide range of programming languages, and none of them > except C use printf. That it's in a lower-level standard is beside > the point. Languages should be independent of operating systems > and the like. Now, if you want to propose that we have all POSIX > calls in a library, maybe that makes sense. > > The language *is* independent of OS's and the like. However, we're > talking about the I/O library, which is necessarily tied to some > notion of its operating environment. 
It just so happens that POSIX > provides such a notion which is both familiar to most programmers and > quite portable across the vast majority of platforms on which EuLisp > will be used. I'm often annoyed by OS dependencies in C I/O and am repeatedly thankful that Lisp is at a sufficiently higher level that similar problems don't occur (very often). What sorts of OS dependencies do you encounter in C I/O these days? Is it because you're using non-POSIX functions or options? Do you program for DOS or Windows 3? The only alternative is to get them to do in Lisp things they wouldn't do at all otherwise. I find people willing to do so much in C and C++ that I think the scope for this is small. Then you think Lisp doesn't have much future? -- Harley * The rest of the write-ups from Harley :From: Jeff Dalton Harley -- I had a closer look at your big document last night. Printf is such a small issue that I'm surprised it's generated such long messages. There are a number of more important issues in there; it will be interesting to see what happens to them. After looking again at the whole set of proposals, I'm no longer so inclined to oppose printf. However, I do have several other concerns, some minor, others not. * The buffer operations rely on default handlers. Eulisp doesn't have them (unless they sneaked in when I wasn't looking). Adding default handlers is a significant non-local change. I don't think we should make it without thinking very carefully about the consequences. * The rules for defliteral with respect to modules are far too restrictive. Expressions containing only constants (including literals defined by defliteral) should be evaluable at compile time. This is pretty standard practice in compilers and doesn't bring in the deeper environment issues that have been tied to using *macros* in the module in which they're defined. Applying the same restriction to literals creates unnecessary trouble for users and makes the language look bad. 
* The #f syntax is taken for filename objects. I would prefer that it be available for people who want to follow Scheme conventions. * The order of args for merge-filenames is somewhat peculiar. I find it easier to handle such functions if one arg (the 2nd in CL) is treated as supplying defaults. * There's no fdopen (which I would find useful). Most of the POSIX-derived routines are FILE * routines, but some are (normally) for fds; so I'm a bit puzzled about what the rules are. * There are more POSIX-related routines in here than in the C standard. If we take the POSIX route, I think we should identify a core subset and relegate the rest to a POSIX library, if we want them. This would be level 2 and hence not specified at this time. -- jeff * converter & inheritance :From: Ingo Mohr During implementation of collection functions a question occurs about the intention of converters: [1] Should converters be bound exactly to the class given in (defgeneric (converter class) ...) or [2] should subclasses inherit converters of their superclasses? Looking at the definition of convert&co (0.99) the answer must be [1]. The answer [2] is implied indirectly by the (converter <list>): If this converter is available only for <list> I cannot use it in conjunction with class-of as in (convert y (class-of x)) because <list> as an abstract class should never be the result of class-of. On the other hand it makes no sense to replace (converter <list>) by (converter <cons>) in the definition because it is useless for conversion of zero-length sequences. The same problem occurs if another of these classes will be an abstract class in the future. It would be fine if some of you can give me an answer. Ingo * converter & inheritance :From: Harley Davis In article Ingo Mohr writes: During implementation of collection functions a question occurs about the intention of converters: [1] Should converters be bound exactly to the class given in (defgeneric (converter class) ...) or [2] should subclasses inherit converters of their superclasses? I remember that we decided at one meeting that the answer is definitely [2] for exactly the reasons you brought up. I would hope the definition respects this intention, but it is never guaranteed. -- Harley * The rest of the write-ups from Harley :From: Harley Davis In article Jeff Dalton writes: Harley -- I had a closer look at your big document last night. Printf is such a small issue that I'm surprised it's generated such long messages. There are a number of more important issues in there; it will be interesting to see what happens to them. Well, I'm glad we've moved from confrontation mode to technical discussion mode! Your points are all valid concerns. Let's see what there is to say... * The buffer operations rely on default handlers. Eulisp doesn't have them (unless they sneaked in when I wasn't looking). Adding default handlers is a significant non-local change. I don't think we should make it without thinking very carefully about the consequences. This issue has never come up in EuLisp because all of the conditions defined up to this point are errors, for which we have no need (or desire) to specify the default handling. These conditions are the first which do require such a notion. In Talk we have a distinguished handler function named default-handler which is called when no others are applicable (or when a handler method falls through.) Not only does the existence of this handler cause no particular problems, but it is quite necessary. 
Even without specifying the name of the default handler, I don't see how specifying default behavior for a condition can cause special problems. Introducing a special named handler also causes no problems because of the rule which says that the user cannot define methods for specified generic functions specializing only on specified classes. * The rules for defliteral with respect to modules are far too restrictive. Expressions containing only constants (including literals defined by defliteral) should be evaluable at compile time. This is pretty standard practice in compilers and doesn't bring in the deeper environment issues that have been tied to using *macros* in the module in which they're defined. Applying the same restriction to literals creates unnecessary trouble for users and makes the language look bad. What you say is entirely true, and we could adopt that approach. In Talk we chose not to simply because it complicates the explanation of module processing, and we found that even with our simpleminded module system people often had some trouble understanding what was going on. We could also adopt another version of defliteral which only accepts constant expressions (as outlined above), and then the problem also doesn't come up. However, I do from time to time write defliterals with non-constant values where I don't mind evaluating the value in the compilation environment, so such a restriction would be occasionally bothersome. Some people would also be a little surprised if (defliteral x (+ 1 1)) was not a legal literal, since it looks like a constant expression. For these cases, you would have to explain why the literal needs to be in a syntax module. Finally, at least in our experience it is not too troublesome to put defliterals apart simply because most systems already have one or more compile-time macro modules so no new module is needed just for literals. * The #f syntax is taken for filename objects. 
I would prefer that it be available for people who want to follow Scheme conventions. I have no opinion on this. We just chose #f for obvious reasons and because we don't give a hoot about Scheme compatibility. * The order of args for merge-filenames is somewhat peculiar. I find it easier to handle such functions if one arg (the 2nd in CL) is treated as supplying defaults. Perhaps if merge-filenames were called filename+ or something like that it would be clearer. It is not intended to be equivalent to CL's merge-pathnames. It is really a concatenation whose intended purpose is to catenate a directory to a basename and optionally an extension: (merge-filenames #f"/tmp" #f"some-file" ".e") -> #f"/tmp/some-file.e" This was seen to be the most frequent use of the various pathname merging functions from Le-Lisp, and so we made this case as easy as possible. * There's no fdopen (which I would find useful). Most of the POSIX-derived routines are FILE * routines, but some are (normally) for fds; so I'm a bit puzzled about what the rules are. OK, this is completely true. As I mentioned briefly in my last message, we have recently abandoned the FILE* metaphor for a purely fd-based metaphor. (This happened since we wrote the doc.) So we replaced fopen with open, etc. (Problem: The mode specification with open is really annoying compared with fopen.) The buffering is then a purely Lisp-based notion, only conceptually derived from FILE*'s. * There are more POSIX-related routines in here than in the C standard. If we take the POSIX route, I think we should identify a core subset and relegate the rest to a POSIX library, if we want them. This would be level 2 and hence not specified at this time. I have no problem with this. Would you like to take a crack at specifying the core functions? -- Harley * Harley write-ups: defliteral :From: Jeff Dalton > In article Jeff Dalton writes: > > Harley -- I had a closer look at your big document last night. 
> Printf is such a small issue that I'm surprised it's generated > such long messages. There are a number of more important issues > in there; it will be interesting to see what happens to them. > > Well, I'm glad we've moved from confrontation mode to technical > discussion mode! Ah, but you haven't seen my next message yet. > Your points are all valid concerns. Let's see what there is to say... > * The rules for defliteral with respect to modules are far too > restrictive. Expressions containing only constants (including > literals defined by defliteral) should be evaluable at compile > time. [...] > What you say is entirely true, and we could adopt that approach. Let's then. But is it clear what "that approach" is? (See below.) > In Talk we chose not to simply because it complicates the explanation of > module processing, and we found that even with our simpleminded module > system people often had some trouble understanding what was going on. But the workaround (using a defglobal instead of or paired with a defliteral) is tricky too. [I'm referring here to the part of the paper that suggests that when you want something like this: (defliteral %pi% 3.14...) (defliteral %pi%/2 (/ %pi% 2)) which is illegal (!), you can get around this by writing: (defglobal :pi 3.14 ...) (defliteral %pi% :pi) (defliteral %pi%/2 (/ :pi 2)) ] > We could also adopt another version of defliteral which only accepts > constant expressions (as outlined above), and then the problem also > doesn't come up. However, I do from time to time write defliterals > with non-constant values where I don't mind evaluating the value in > the compilation environment, so such a restriction would be > occasionally bothersome. > > Some people would also be a little surprised if (defliteral x (+ 1 1)) > was not a legal literal, since it looks like a constant expression. > For these cases, you would have to explain why the literal needs to be > in a syntax module. 
(+ 1 1) is an expression containing only constants in the sense that I had in mind. So is (/ %pi% 2) when %pi% is defined by defliteral. > Finally, at least in our experience it is not too troublesome to put > defliterals apart simply because most systems already have one or more > compile-time macro modules so no new module is needed just for literals. That's fine when the expressions don't refer to other literals. Needing a module for each level makes this whole approach to modules look wrong. -- jeff * Harley write-ups: etc :From: Jeff Dalton > * The #f syntax is taken for filename objects. I would prefer > that it be available for people who want to follow Scheme > conventions. > > I have no opinion on this. We just chose #f for obvious reasons and > because we don't give a hoot about Scheme compatibility. If we called them "pathnames", we could use #p, I suppose. KCL uses #"...". But do we have readtables in EuLisp these days? This is a good case for testing whether our system is sufficiently flexible, because someone might want Scheme syntax in some modules but not in others. > * The order of args for merge-filenames is somewhat peculiar. > I find it easier to handle such functions if one arg (the 2nd > in CL) is treated as supplying defaults. > > Perhaps if merge-filenames were called filename+ or something like > that it would be clearer. It is not intended to be equivalent to CL's > merge-pathnames. It is really a concatenation whose intended purpose > is to catenate a directory to a basename and optionally an extension: > > (merge-filenames #f"/tmp" #f"some-file" ".e") -> #f"/tmp/some-file.e" > > This was seen to be the most frequent use of the various pathname > merging functions from Le-Lisp, and so we made this case as easy as > possible. I'm used to the default idea. It doesn't have to be compatible with CL's merge-pathnames. 
Maybe this is OK too, but if so I think the documentation could do more to suggest a "model" for understanding what roles the arguments play. The defaulting idea is a model I find helpful. Anyway, what do other people think? > * There's no fdopen (which I would find useful). Most of the > POSIX-derived routines are FILE * routines, but some are > (normally) for fds; so I'm a bit puzzled about what the rules > are. > > OK, this is completely true. As I mentioned briefly in my last > message, we have recently abandoned the FILE* metaphor for a purely > fd-based metaphor. (This, since we wrote the doc.) So we replaced > fopen with open, etc. (Problem: The mode specification with open is > really annoying compared with fopen.) The buffering is then a purely > Lisp-based notion, only conceptually derived from FILE*'s. Humm. I'm not sure what all the implications of this are. If I can use printf in both Lisp and C, I'd expect the output to go to the same place and be interleaved in the obvious way. Indeed, it's a pain when using C with some Lisps that output to the same destination is independent. > * There are more POSIX-related routines in here than in the C > standard. If we take the POSIX route, I think we should identify a > core subset and relegate the rest to a POSIX library, if we want > them. This would be level 2 and hence not specified at this time. > > I have no problem with this. Would you like to take a crack at > specifying the core functions? Can someone say what C specifies? (Richard?) -- jeff * Harley write-ups: default handlers :From: Jeff Dalton > Your points are all valid concerns. Let's see what there is to say... > > * The buffer operations rely on default handlers. Eulisp doesn't > have them (unless they sneaked in when I wasn't looking). Adding > default handlers is a significant non-local change. I don't think > we should make it without thinking very carefully about the > consequences. 
> > This issue has never come up in EuLisp because all of the conditions > defined up to this point are errors, for which we have no need (or > desire) to specify the default handling. On the contrary, the issue has come up several times and I have always (successfully) opposed it. However, my main concern is not default handlers per se but rather an idea that default handlers encourage, namely the idea that certain types of conditions are -- by virtue of being that type -- continuable or not. I think this should be a property of how the condition is signalled, not of the type. Programmers should be free to signal the most appropriate condition without being discouraged by the prospect of writing a recovery handler for it. This is one of the considerations that went into the design of the CL condition system, and I think it's right. That is, I think we should make signalling independent of any provision for recovery. Default handlers encourage a different way of thinking in which the signaller has to know what the default handler does and be prepared to deal with it. Default handlers also, of course, have the usual problems of such global arrangements, that when different parts of a system have different requirements it's difficult for them to fit together. > These conditions are the first which do require such a notion. The requirement is at least fairly subtle. Why can't an appropriate generic be called directly? > In Talk we have a distinguished > handler function named default-handler which is called when no > others are applicable (or when a handler method falls through.) Not > only does the existence of this handler cause no particular problems, > but it is quite necessary. 
In Eulisp, we were careful to define signalling in such a way that it didn't require a default handler to give the default "no handler" behavior; instead a no-handler function (perhaps a nominal function rather than one actually in the language) was called when no handler handled the condition. (This includes the case where no handler exists, of course.) Maybe this isn't obvious now, but it was explicitly an aim of this definition to avoid bringing default handlers into the picture. -- jeff * Harley write-ups: etc :From: Richard Tobin > I'm used to the default idea. > Anyway, what do other people think? Being used to the C/Unix way of doing things, I find Harley's version clearer (especially if it has a better name). BTW, does the filename stuff allow for HTTP URLs? These look like http://host[:port]/path/path/... > Humm. I'm not sure what all the implications of this are. > If I can use printf in both Lisp and C, I'd expect the output to > go to the same place and be interleaved in the obvious way. > Indeed, it's a pain when using C with some Lisps that output > to the same destination is independent. This is a problem. It could of course be overcome by an implementation providing its own replacement for the C stdio library that used the same buffers as Lisp (and there are several free versions available), but this is not ideal. If C and Lisp use the same file descriptor but different buffers, they have to be sure to call fflush() sufficiently often. > > I have no problem with this. Would you like to take a crack at > > specifying the core functions? I agree that it needs to be cut down. It would be bizarre if EuLisp specified more of this than C does! > Can someone say what C specifies? (Richard?) 
Ok (grouping corresponds to that in the C standard): remove(), rename(), tmpfile(), tmpnam() fclose(), fflush(), fopen(), freopen(), setbuf(), setvbuf() fprintf(), fscanf(), printf(), scanf(), sprintf(), sscanf(), vfprintf(), vprintf(), vsprintf() fgetc(), fgets(), fputc(), fputs(), getc(), getchar(), gets(), putc(), putchar(), puts(), ungetc(), fread(), fwrite() fgetpos(), fseek(), fsetpos(), ftell(), rewind() clearerr(), feof(), ferror(), perror() The v*printf() functions are irrelevant (we have rest lists), as are the f* variants of getc() etc (which are just non-macro variants). In addition, we probably *do* want to specify the fillbuf/flushbuf functions (I don't have a POSIX document, but I assume it doesn't). Whether we want set[v]buf depends on how we do that. We certainly don't want tcgetattr() and other such POSIXisms. -- Richard * printf :From: Jeff Dalton > It looks like you're not providing a binding but rather a different > (albeit similar) function. > > In the mysterious world of inter-language standards, this proposal > certainly counts as a binding. If you doubt it, check out the CORBA > spec and what they count as a language binding. A binding to X ought to be something that refers to X, not something that has the same name as something that refers to X but which actually refers to X' or maybe Z. Very strange. > If I were using C and Eulisp together, I'd want to know several > things. Can I pass EuLisp strings and streams directly to C > functions? > Do Eulisp std{in,out} and C std{in,out} stay in step? > Do they share buffers? > > Since there is no foreign language interface in EuLisp, it's pretty > hard to specify this, no? Sure. But if we're going down this road of making it easier for EuLisp and C to work together, this is the sort of thing that has to be addressed. My objection to the printf proposal was, in a sense, that it was taking a few steps in that direction without really taking it seriously. > (... 
it turned out that the FILE* buffers
> aren't sufficiently portably controllable.)

(This is one reason why C I/O is full of annoying special cases and references to stdio lib internals that differ from implementation to implementation. Lose, lose.)

> Now it looks to me like you're diverging from the POSIX routines
> in some significant ways, though it's not clear exactly what they
> are. But once you diverge, having the same names is a somewhat
> mixed blessing.
>
> If you want to go with the sales argument, you can say that EuLisp has
> a POSIX binding with improvements, so C programmers using EuLisp have
> both a familiar set of functions and a more comfortable environment to
> use them.

That makes sense. We can say: see how nice even printf would be if it wasn't embedded in a language that can just barely deal with n-arg functions, or something along those lines.

> > I really do believe that printf is
> > better than some mutant format which is in any case based on printf.
>
> Was it based on printf?
>
> Please read section A.10.3 of EuLisp 0.99, page 54, remarks for the
> function format:
>
> "These formatting directives are intentionally compatible with the
> facilities defined for the function fprintf in ISO/IEC 9899 : 1990."

I think it's pretty clear that the EuLisp format function is based on MacLisp / CL format, though no doubt we've also held C in mind. (Hence the \n.) Indeed, this is so clear that I thought you must be saying the original format was based on printf.

> The new directive %A prints all Lisp objects as if printed by prin.
> This will obviously handle numbers too.

But will the C directives handle these things? That was my point. In addition, I need two Lisp data directives: one that prints escape chars, quote marks around strings, etc; and one that doesn't.

> So C and C++ programmers matter after all, do they?
>
> Of course, that's the whole point. I differed with the idea that this
> proposal is primarily meant to attract them rather than retain them.
Are you serious?! *That*'s what it comes down to? The difference between attract and retain? (As if you might retain while repelling, say.) Maybe Ilog has C++ programmers it wants to retain, but EuLisp doesn't.

> If the name doesn't matter, then changing the name is gratuitous.
>
> Exactly. Why change from the standard printf to bizarre, obscure format?

Why change EuLisp? That's the gratuitous change I'm talking about.

> > How many unfortunate consequences could it have?
>
> Maybe it gets in the way of calling the real printf. Maybe it
> rules out too many extensions. Maybe POSIX-binding makes us too
> OS dependent.
>
> Unices are almost all POSIX compliant.
> Windows NT has a POSIX compliant module (and better ones available
> commercially.)
> VMS is now POSIX compliant.
> Even DOS has a goodly number of the POSIX functions.

Is this supposed to show it doesn't get in the way of calling the real printf or that it doesn't rule out too many extensions? Or even that it doesn't bring us too close to the OS? It's not just a matter of portability, it's also having losing OS features intrude into the language.

> I would be happy to propose a minimal foreign function interface (for
> C anyway) if people are interested.

I am, and there are some people on the net who would be too, because they've been complaining about the lack of exactly that. From time to time, I try to point such people at EuLisp, and I'd like to be able to point them more effectively. Moreover I'm all the time needing to call C things myself, although I call things like waitpid which don't have the most accommodating interface.

> But upwardly compatible with it.

Depends. Lower case %a is evidently not an extension allowed by the standard.

> You've shown a case (%d vs ~d) that works well. %s works less well,
>
> Converts its argument to a string. Etc.

Perhaps you misunderstood me. %d and ~d make it look like a EuLisp printf is a good idea, instantly understandable to both Lisp and C programmers.
But that's the best example. There are others that don't make things look so nice. %s and ~s do different things, for instance. %X has a meaning that Lisp programmers won't expect, though they might get a clue from %e vs %E. And so on.

> There are
> many important cases that aren't resolved by analogy with C's printf.
>
> But they're in the proposal.

Beside the point. Programmers won't be able to say "Humm. By analogy with C printf, it probably works like this."

> In any case this is a very small part of what programmers have to do
> when moving between Lisp and C.
>
> Naturally. Any implementation needs a foreign function interface. As
> I said, if people are willing to consider such a thing for the
> language (and I think the public would applaud a Lisp with a standard,
> even minimal, FFI), I would be happy to start the ball rolling with a
> proposal.

I'd like to see a proposal with the POSIX stuff simplified and an FFI added.

> I'm often annoyed by OS dependencies in C I/O and am repeatedly thankful
> that Lisp is at a sufficiently higher level that similar problems
> don't occur (very often).
>
> What sorts of OS dependencies do you encounter in C I/O these days?

Code that assumes the internals of a particular stdio lib implementation. Code that assumes Sys V system calls. Code that

> Is it because you're using non-POSIX functions or options?

It's chiefly because other people can't or didn't write portable code.

> Do you program for DOS or Windows 3?

I use only Unix, and stick to BSD when I can. The whole Sys V excursion was a big mistake that will give the world to Microsoft. Making the same kind of mistake w.r.t. C++ will be a disaster too. (The mistake is making something have a major impact by thinking it will have a major impact and then acting accordingly, thus bringing about a major impact that could otherwise have been avoided. Meanwhile, some other folk, not so distracted, do something else that wins.)
-- jeff

* Harley write-ups: default handlers
:From: Harley Davis

However, my main concern is not default handlers per se but rather an idea that default handlers encourage, namely the idea that certain types of conditions are -- by virtue of being that type -- continuable or not. I think this should be a property of how the condition is signalled, not of the type. Programmers should be free to signal the most appropriate condition without being discouraged by the prospect of writing a recovery handler for it. This is one of the considerations that went into the design of the CL condition system, and I think it's right.

I think it's not right for all conditions. For instance, the / conditions are just inherently continuable. (Or, if you really insist, they are always signaled continuably since the system signals them that way.) This case is somewhat different from your point since programmers are never supposed to signal the conditions themselves.

That is, I think we should make signalling independent of any provision for recovery. Default handlers encourage a different way of thinking in which the signaller has to know what the default handler does and be prepared to deal with it.

What's wrong with this way of thinking for these conditions?

Default handlers also, of course, have the usual problems of such global arrangements, that when different parts of a system have different requirements it's difficult for them to fit together.

If there is one default handler which is a unique object I don't see how this problem arises.

> These conditions are the first which do require such a notion.

The requirement is at least fairly subtle. Why can't an appropriate generic be called directly?

We wanted to support two ways to extend streams:

1. When designing a new class of streams, by writing a method on the gf fill-buffer (or flush-buffer) you describe its interaction with the rest of the system. Example: the vector-string-stream example in the proposal.

2.
You can dynamically control all streams by wrapping a handler which adds or modifies the behavior of these conditions. Example: the line counting example in the proposal. Another example is the Le-Lisp pretty-printer which uses a clever hack involving the condition to know when to put an expression on one line and when to split it up.

The Le-Lisp experience showed that both dimensions of extensions are desirable. We could, of course, just say "tough" to the second type of extension and get rid of the conditions. I think this would be sad, especially since it's quite difficult to subclass file streams.

> In Talk we have a distinguished
> handler function named default-handler which is called when no
> others are applicable (or when a handler method falls through.) Not
> only does the existence of this handler cause no particular problems,
> but it is quite necessary.

In EuLisp, we were careful to define signalling in such a way that it didn't require a default handler to give the default "no handler" behavior; instead a no-handler function (perhaps a nominal function rather than one actually in the language) was called when no handler handled the condition. (This includes the case where no handler exists, of course.) Maybe this isn't obvious now, but it was explicitly an aim of this definition to avoid bringing default handlers into the picture.

Well, I can only say that we have them, like them, and use them all the time. What exactly is the argument for the idea that no condition class is ever inherently continuable? (In Talk, continuability is always a question of the protocol between the signaler and the handler for a given condition class. It is not an explicit notion.)

-- Harley

* Harley write-ups: default handlers
:From: Jeff Dalton

> The / conditions are just inherently
> continuable. (Or, if you really insist, they are always signaled
> continuably since the system signals them that way.)

The names suggest requests rather than conditions.
sounds more like a condition, and I don't see why it has to be continuable.

The line counting and similar examples came up before. If they require inherently continuable conditions, I'd rather lose the examples. I think this is an issue we considered several times before and we stayed with the approach we have now. I don't think we should overturn all that as a side effect of adopting a stream proposal. I also don't want to have to spend a lot of time defending those decisions. Is it an essential part of the stream proposal?

> That is, I think we should make signalling independent of any
> provision for recovery. Default handlers encourage a different
> way of thinking in which the signaller has to know what the
> default handler does and be prepared to deal with it.
>
> What's wrong with this way of thinking for these conditions?

Why should I have to provide a way to continue? If you want to define a protocol in which a way to continue is normally provided, that may be ok. But if it becomes a property of the type, then pretty soon every case where we can think of a useful way to continue will start requiring this. One bad consequence of this, indicated in my earlier message, is that programmers will signal a less appropriate condition just to avoid dealing with continuing.

I also think it's conceptually cleaner if we separate signalling from providing ways to recover. Our system already departs from this by having a continue arg to signal (at least that's what I remember). I can live with that, but I don't want to go further.

> Default handlers also, of course, have the usual problems of such global
> arrangements, that when different parts of a system have different
> requirements it's difficult for them to fit together.
>
> If there is one default handler which is a unique object I don't see
> how this problem arises.

What do you mean? Different parts may want different defaults.

> > These conditions are the first which do require such a notion.
>
> The requirement is at least fairly subtle. Why can't an
> appropriate generic be called directly?
>
> We wanted to support two ways to extend streams:
>
> 1. When designing a new class of streams, by writing a method on the
> gf fill-buffer (or flush-buffer) you describe its interaction with
> the rest of the system. Example: the vector-string-stream example
> in the proposal.
>
> 2. You can dynamically control all streams by wrapping a handler which
> adds or modifies the behavior of these conditions. Example: the
> line counting example in the proposal. Another example is the
> Le-Lisp pretty-printer which uses a clever hack involving the
> condition to know when to put an expression on one
> line and when to split it up.

Since 2 doesn't seem to be an essential part of the stream proposal, I'd like to drop it. I think it brings in too many difficult issues too late in the day.

> The Le-Lisp experience showed that both dimensions of extensions are
> desirable. We could, of course, just say "tough" to the second type
> of extension and get rid of the conditions. I think this would be
> sad, especially since it's quite difficult to subclass file streams.

I would think that one of our aims would be to provide file streams that were reasonably easy to subclass.

> Maybe this isn't obvious now, but it was explicitly an aim of this
> definition to avoid bringing default handlers into the picture.
>
> Well, I can only say that we have them, like them, and use them all
> the time.
> What exactly is the argument for the idea that no condition
> class is ever inherently continuable?

I haven't given one. My argument is more pragmatic and aesthetic than essentialist. However, if I signal a condition, I'm saying that something has occurred. Why should this ever require that I also provide a way to continue? It looks like a separate decision to me.
> (In Talk, continuability is
> always a question of the protocol between the signaler and the handler
> for a given condition class. It is not an explicit notion.)

This is better, and I might not mind it if it were sufficiently well confined. But I don't want us to start requiring this in all kinds of cases, and I think it has various problems.

-- jd

* write-ups from Harley
:From: Jeff Dalton

> > BTW, do we have null-terminated strings in EuLisp? I think it
> > would be a good idea. Also a way to test for the null char.
>
> > I agree. Of course, we should be careful to also allow the length to
> > be explicitly coded in the string.
>
> We shouldn't give up the rule that strings can contain any characters
> whatsoever.

Why not? Why is that more important than being able to give strings to C?

> I think #\x0 must be excluded from the type
> if we want to allow implementations that use null-terminated
> strings.

I don't care one way or the other. The "null char" can be a separate type. But it could be a character. Separate character and string-character classes would not be required.

> Never specify that the set of characters has at least 256 elements.

Is that the current limit? What are we going to do about international character sets?

> Otherwise we end up with Common-Lisp's distinction between CHARACTER
> and STRING-CHAR.

STRING-CHAR has been removed. This was part of adapting to international character sets.

-- jd

* Harley write-ups: defliteral
:From: Jeff Dalton

> > What you say is entirely true, and we could adopt that approach.
>
> Let's then. But is it clear what "that approach" is? (See below.)
>
> We shouldn't do something just because we can. For example, the fact
> that your idea is rather complex to explain is a consideration.

I don't think my idea is even slightly complex to explain, especially compared to Talk's defglobal trick. I still don't understand why that works until I think for a minute or so.
> But the workaround (using a defglobal instead of or paired with a
> defliteral) is tricky too.
>
> But it follows an extremely clear model of module processing/execution.

If anything, it discredits that model.

> In other words, in Talk, in no sense is any
> form ever evaluated during module preprocessing. In your proposal,
> some forms are evaluated. We considered the simple model to be very
> important.

I think my model is simpler overall. Moreover, I think this rule about modules is distorting more and more of the language. That we can't even have a simple way of defining constants is going too far.

> (+ 1 1) is an expression containing only constants in the sense that
> I had in mind. So is (/ %pi% 2) when %pi% is defined by defliteral.
>
> Is '+' a literal? If so, what about user-defined functions? This
> problem is definitely harder in EuLisp than in, say, C, because even
> the core functions are supposed to be redefinable and treated as
> normal bindings.

Is this so? When did it happen? People used to be concerned about being able to analyze code. Are you now telling me that when the compiler looks at a module in which + appears it can't tell what + means? If so, then either we've broken the ability to analyze code or the analysis is incredibly wimpy.

> > Finally, at least in our experience it is not too troublesome to put
> > defliterals apart simply because most systems already have one or more
> > compile-time macro modules so no new module is needed just for literals.
>
> That's fine when the expressions don't refer to other literals.
> Needing a module for each level makes this whole approach to modules
> look wrong.
>
> The defglobal solution suggested in the proposal gets around requiring
> more than 1 level. That's its purpose.

I know that's its purpose. But far from making this approach look good, it makes it look like it must be based on a mistake.
-- jd

* Harley write-ups: etc
:From: Jeff Dalton

> > (merge-filenames #f"/tmp" #f"some-file" ".e") -> #f"/tmp/some-file.e"
>
> > This was seen to be the most frequent use of the various pathname
> > merging functions from Le-Lisp, and so we made this case as easy as
> > possible.
>
> I'm used to the default idea. It doesn't have to be compatible
> with CL's merge-pathnames. Maybe this is OK too, but if so I
> think the documentation could do more to suggest a "model"
> for understanding what roles the arguments play. The defaulting
> idea is a model I find helpful.
>
> I think it's complicated to understand for beginners, and really
> confusing for people used to C (cf Richard's message.)

What is it about C or Unix that makes it confusing? It's evidently something I never encountered, and Richard's message didn't say what it was either. BTW, when I first encountered merging (not in Lisp or Unix, though I can't remember for sure where it was), merging with defaults seemed very natural to me. I still don't have a simple model for whatever it is you're doing.

> Humm. I'm not sure what all the implications of this are.
> If I can use printf in both Lisp and C, I'd expect the output to
> go to the same place and be interleaved in the obvious way.
> Indeed, it's a pain when using C with some Lisps that output
> to the same destination is independent.
>
> EuLisp printf needs to flush immediately for good interleaving. As
> far as prin goes, the flushing is explicit (or implicit with print).
> This is no worse than C++ streams, and people seem to live with that.

My point is this: if we have printf, I want these other things to be true. If they aren't then printf starts being accompanied by pitfalls. I don't want a big pitfall list for EuLisp. (My Common Lisp one is only a few pages, but I've been forgetting to put things in.)
-- jeff

* Harley write-ups: etc
:From: Harley Davis
Date: Tue, 7 Dec 93 16:38:07 GMT

:From: Jeff Dalton

> * The #f syntax is taken for filename objects. I would prefer
> that it be available for people who want to follow Scheme
> conventions.
>
> I have no opinion on this. We just chose #f for obvious reasons and
> because we don't give a hoot about Scheme compatibility.

If we called them "pathnames", we could use #p, I suppose.

Could work. We didn't do this to avoid confusion with other Lisps' pathnames.

KCL uses #"...".

I hate this idea.

But do we have readtables in EuLisp these days? This is a good case for testing whether our system is sufficiently flexible, because someone might want Scheme syntax in some modules but not in others.

We don't have readtables, and I think they're a bad idea because they make module processing even more complicated. Read time vs. compile time vs. load/run time -- yuck. I think another consideration was to be able to write the reader in lex/yacc without needing callbacks to Lisp.

> (merge-filenames #f"/tmp" #f"some-file" ".e") -> #f"/tmp/some-file.e"
>
> This was seen to be the most frequent use of the various pathname
> merging functions from Le-Lisp, and so we made this case as easy as
> possible.

I'm used to the default idea. It doesn't have to be compatible with CL's merge-pathnames. Maybe this is OK too, but if so I think the documentation could do more to suggest a "model" for understanding what roles the arguments play. The defaulting idea is a model I find helpful.

I think it's complicated to understand for beginners, and really confusing for people used to C (cf Richard's message.)

> * There's no fdopen (which I would find useful). Most of the
> POSIX-derived routines are FILE * routines, but some are
> (normally) for fds; so I'm a bit puzzled about what the rules
> are.
>
> OK, this is completely true. As I mentioned briefly in my last
> message, we have recently abandoned the FILE* metaphor for a purely
> fd-based metaphor.
(This, since we wrote the doc.) So we replaced
> fopen with open, etc. (Problem: The mode specification with open is
> really annoying compared with fopen.) The buffering is then a purely
> Lisp-based notion, only conceptually derived from FILE*'s.

Humm. I'm not sure what all the implications of this are. If I can use printf in both Lisp and C, I'd expect the output to go to the same place and be interleaved in the obvious way. Indeed, it's a pain when using C with some Lisps that output to the same destination is independent.

EuLisp printf needs to flush immediately for good interleaving. As far as prin goes, the flushing is explicit (or implicit with print). This is no worse than C++ streams, and people seem to live with that.

-- Harley

* Harley write-ups: defliteral
:From: Harley Davis

> Your points are all valid concerns. Let's see what there is to say...
>
> * The rules for defliteral with respect to modules are far too
> restrictive. Expressions containing only constants (including
> literals defined by defliteral) should be evaluable at compile
> time. [...]
>
> What you say is entirely true, and we could adopt that approach.

Let's then. But is it clear what "that approach" is? (See below.) We shouldn't do something just because we can. For example, the fact that your idea is rather complex to explain is a consideration.

> In Talk we chose not to simply because it complicates the explanation of
> module processing, and we found that even with our simpleminded module
> system people often had some trouble understanding what was going on.

But the workaround (using a defglobal instead of or paired with a defliteral) is tricky too.

But it follows an extremely clear model of module processing/execution. In other words, in Talk, in no sense is any form ever evaluated during module preprocessing. In your proposal, some forms are evaluated. We considered the simple model to be very important.
> Some people would also be a little surprised if (defliteral x (+ 1 1))
> was not a legal literal, since it looks like a constant expression.
> For these cases, you would have to explain why the literal needs to be
> in a syntax module.

(+ 1 1) is an expression containing only constants in the sense that I had in mind. So is (/ %pi% 2) when %pi% is defined by defliteral.

Is '+' a literal? If so, what about user-defined functions? This problem is definitely harder in EuLisp than in, say, C, because even the core functions are supposed to be redefinable and treated as normal bindings.

> Finally, at least in our experience it is not too troublesome to put
> defliterals apart simply because most systems already have one or more
> compile-time macro modules so no new module is needed just for literals.

That's fine when the expressions don't refer to other literals. Needing a module for each level makes this whole approach to modules look wrong.

The defglobal solution suggested in the proposal gets around requiring more than 1 level. That's its purpose.

-- Harley

* write-ups from Harley
:From: Bruno Haible

> BTW, do we have null-terminated strings in EuLisp? I think it
> would be a good idea. Also a way to test for the null char.
>
> -- jeff
>
> I agree. Of course, we should be careful to also allow the length to
> be explicitly coded in the string.
>
> -- Harley

We shouldn't give up the rule that strings can contain any characters whatsoever. I think #\x0 must be excluded from the type if we want to allow implementations that use null-terminated strings. Never specify that the set of characters has at least 256 elements. Instead the range of numbers nnn for which #\xnnn is meaningful must be restricted. Otherwise we end up with Common-Lisp's distinction between CHARACTER and STRING-CHAR.

Bruno Haible

* The 80s strike again
:From: Jeff Dalton

> It doesn't make life easier for me.
There's a class of programmers,
> which doesn't include programmers like me, that's going to get this
> easier life.
>
> I think I explicitly mentioned that this would help a majority of
> programmers.

Is this some kind of utilitarianism, or what? (Utilitarianism has the known flaw of allowing nasty things to happen to a few because of benefits to the many.)

> Personally, given the choice between helping the
> minority of Lisp programmers vs. the vast majority of C/C++
> programmers, I prefer the latter.

I would rather benefit programmers by providing a better language than by giving them what they already think they want. Moreover, I regard this as a better long-term strategy, and I suspect you may even agree with it. However, I don't see the point of trying to benefit via programming language design a group of people who have very different ideas than I do about what counts as a good language, especially when there are plenty of other people providing the kinds of languages those people prefer. If this means I have to appeal to a minority of C and C++ programmers, so be it.

> This necessarily means that the minority is slightly disgruntled.

Why does it *necessarily* mean this?

> I would have to say, too bad, Jeff,
> but I don't really think you would suffer very much, if at all.

This is the voice of the 80s. Other factors count for nothing, and an essentially economic decision prevails.

> The only alternative is to get them to do in Lisp things they wouldn't
> do at all otherwise. I find people willing to do so much in C and C++
> that I think the scope for this is small.
>
> Then you think Lisp doesn't have much future?

I think a couple of futures are still open. Lisp could have a future much like its past, in which it is a somewhat exotic AI language used in some kinds of research but only occasionally for something more general. Scheme might survive as an educational language.
Another possibility is that Lisp will survive as another tool in the Unix / Windows NT toolbox, as well as having some specialized applications of its own.

I don't know what factors are most important elsewhere, such as among Ilog's customers, but here there are a couple of things that work against Lisp. One is that some people, informed by the usual misunderstandings and prejudices about Lisp, decide that they can't use Lisp for a project. I tell them that they're wrong about Lisp. It doesn't have to be big and slow or run only on workstations. But that doesn't do them any good so far as their project is concerned, so they're understandably unconvinced. I need to point them to an implementation that can produce small, efficient programs, that can work with C and the X Window system, etc. Unfortunately, I can't. This is a serious problem, and printf doesn't come into it.

Another is that some useful applications start in Lisp but then arrive in C++. (Ilog has done this.) People get the impression that Lisp was inadequate rather than thinking Lisp made it easier to develop the application.

Another factor, though less common, is that Lisp doesn't provide good enough support for programming in the large. About a year ago (Nov 92) we had an exchange about modules, libraries, etc. I think Harley and I agreed that something larger than the current modules is required. I also wanted to remove the parens that end up surrounding all modules. I don't remember if anything ever came of this, but I would like to revive the discussion.

-- jeff

* Harley write-ups: etc
:From: Harley Davis

> Can someone say what C specifies? (Richard?)
Ok (grouping corresponds to that in the C standard):

remove(), rename(), tmpfile(), tmpnam()
fclose(), fflush(), fopen(), freopen(), setbuf(), setvbuf()
fprintf(), fscanf(), printf(), scanf(), sprintf(), sscanf(), vfprintf(), vprintf(), vsprintf()
fgetc(), fgets(), fputc(), fputs(), getc(), getchar(), gets(), putc(), putchar(), puts(), ungetc(), fread(), fwrite()
fgetpos(), fseek(), fsetpos(), ftell(), rewind()
clearerr(), feof(), ferror(), perror()

Note that this is *more* than what is in the proposal! Here is what we did:

  * We dumped all the xgetx, xputx, and fread/fwrite because: 1. If you want that level you can do it in C given that in Talk you can pass a file stream to C and C gets the fd. 2. We didn't intend on sharing buffers anyway.

  * tmpfile() is not particularly useful when you have tmpnam() and open().

  * remove() is already taken by Lisp, so we used unlink().

  * freopen() is not very useful.

  * scanf() is too hard to get right in Lisp. (Do you pass in objects to be modified? Does it return a list of values?) Plus, it's not all that useful given that almost all I/O is done with READ. And, once more, for those rare cases where it's useful, it can be done in C.

  * the file position functions can be done in C.

  * The error stuff is handled by the condition and the interface. This avoids explicitly checking for errors at each call and is thus considered an improvement and a better integration with Lisp.

  * eof is also better handled as a condition than an explicit check.

The v*printf() functions are irrelevant (we have rest lists), as are the f* variants of getc() etc (which are just non-macro variants).

Right.

In addition, we probably *do* want to specify the fillbuf/flushbuf functions (I don't have a POSIX document, but I assume it doesn't specify them). Whether we want set[v]buf depends on how we do that.

Well, we didn't put it in since file-streams don't share buffers with C (especially if we target fd's rather than FILE*!)
We certainly don't want tcgetattr() and other such POSIXisms.

A subset of the terminal handling functions can be useful. Essentially, just the part that manages cbreak mode. Also, isatty() is quite useful.

-- Harley

* Harley write-ups: etc
:From: Harley Davis
Date: Tue, 7 Dec 93 17:44:23 GMT

:From: Richard Tobin

> I'm used to the default idea.
> Anyway, what do other people think?

Being used to the C/Unix way of doing things, I find Harley's version clearer (especially if it has a better name).

BTW, does the filename stuff allow for HTTP URLs? These look like http://host[:port]/path/path/...

Currently, http: would be treated as a device and the rest as a simple directory, without distinguishing the host field. If this is considered important, we could also add a function which extracts the host as a string.

-- Harley

* printf
:From: Harley Davis

> Now it looks to me like you're diverging from the POSIX routines
> in some significant ways, though it's not clear exactly what they
> are. But once you diverge, having the same names is a somewhat
> mixed blessing.
>
> If you want to go with the sales argument, you can say that EuLisp has
> a POSIX binding with improvements, so C programmers using EuLisp have
> both a familiar set of functions and a more comfortable environment to
> use them.

That makes sense. We can say: see how nice even printf would be if it wasn't embedded in a language that can just barely deal with n-arg functions, or something along those lines.

Unbelievable. I actually said something that makes sense to Jeff. Can we make this day into a EuLisp holiday?

> > I really do believe that printf is
> > better than some mutant format which is in any case based on printf.
>
> Was it based on printf?
>
> Please read section A.10.3 of EuLisp 0.99, page 54, remarks for the
> function format:
>
> "These formatting directives are intentionally compatible with the
> facilities defined for the function fprintf in ISO/IEC 9899 : 1990."
I think it's pretty clear that the EuLisp format function is based on MacLisp / CL format, though no doubt we've also held C in mind. (Hence the \n.) Indeed, this is so clear that I thought you must be saying the original format was based on printf.

It's based on both printf and the old format, which is why I called it a mutant. I guess "hybrid" would be more charitable (and genetically accurate).

> The new directive %A prints all Lisp objects as if printed by prin. > This will obviously handle numbers too. But will the C directives handle these things? That was my point.

No, but you get a type test. In addition, I need two Lisp data directives: one that prints escape chars, quote marks around strings, etc; and one that doesn't. In Talk, %A does print those things while %s does not. In other words, %A is rereadable (to the extent that prin is for a given object) while %s is human friendly.

> > How many unfortunate consequences could it have? > > Maybe it gets in the way of calling the real printf. Maybe it > rules out too many extensions. Maybe POSIX-binding makes us too > OS dependent. > > Unices are almost all POSIX compliant. > Windows NT has a POSIX compliant module (and better ones available > commercially.) > VMS is now POSIX compliant. > Even DOS has a goodly number of the POSIX functions.

Is this supposed to show it doesn't get in the way of calling the real printf or that it doesn't rule out too many extensions? Or even that it doesn't bring us too close to the OS? It's not just a matter of portability, it's also having losing OS features intrude into the language.

You said POSIX was too OS-dependent, and I just tried to show that most OS's today are POSIX compliant, so it is not much of a strike against the proposal that it is based on POSIX rather than something more OS-independent (like CL).

> I would be happy to propose a minimal foreign function interface (for > C anyway) if people are interested.
I am, and there are some people on the net who would be too, because they've been complaining about the lack of exactly that. From time to time, I try to point such people at EuLisp, and I'd like to be able to point them more effectively. Moreover I'm all the time needing to call C things myself, although I call things like waitpid which don't have the most accommodating interface.

I don't think a simple C interface (ie just calling foreign functions and passing/returning data) will fix the problem with waitpid() and friends. Indeed, it may be even worse in Lisp than in C without a means to automatically translate enums and symbolic constants into symbols. And I'm not about to propose a way to do that. However, the interface will facilitate writing the part of the application that deals with waitpid() in C and easily calling it, or calling Lisp from it.

Perhaps you misunderstood me. %d and ~d makes it look like a EuLisp printf is a good idea, instantly understandable to both Lisp and C programmers. But that's the best example. There are others that don't make things look so nice. %s and ~s do different things, for instance. %X has a meaning that Lisp programmers won't expect, though they might get a clue from %e vs %E. And so on.

Yes, but most of these programmers already know printf. You, for example, know both. Almost all our customers do mixed language programming already. So the burden is rather small, even nonexistent -- indeed, having just one set of directives to learn would be viewed as a blessing by most.

> There are > many important cases that aren't resolved by analogy with C's printf. > > But they're in the proposal.

Beside the point. Programmers won't be able to say "Humm. By analogy with C printf, it probably works like this."

For most things they will.

> In any case this is a very small part of what programmers have to do > when moving between Lisp and C. > > Naturally. Any implementation needs a foreign function interface.
As > I said, if people are willing to consider such a thing for the > language (and I think the public would applaud a Lisp with a standard, > even minimal, FFI), I would be happy to start the ball rolling with a > proposal. I'd like to see a proposal with the POSIX stuff simplified and an FFI added. I'll be happy to send along the Talk FFI stuff. I hope you'll be willing to rework it plus the POSIX stuff into a form palatable for inclusion in EuLisp. > Is it because you're using non-POSIX functions or options? It's chiefly because other people can't or didn't write portable code. So this issue is not a problem for the proposal. > Do you program for DOS or Windows 3? I use only Unix, and stick to BSD when I can. The whole Sys V excursion was a big mistake that will give the world to Microsoft. As if it didn't belong to them already... Making the same kind of mistake w.r.t. C++ will be a disaster too. (The mistake is making something have a major impact by thinking it will have a major impact and then acting accordingly, thus bringing about a major impact that could otherwise have been avoided. Meanwhile, some other folk, not so distracted, do something else that wins.) It seems to be a little late for C++; the "mistake" has been made. You can't get gcc without a C++ compiler; you can't buy a PC C compiler from Microsoft, Borland, IBM, or Watcom without C++ too. However, I don't think of it as a disaster but rather an opportunity. -- Harley * Harley write-ups: etc :From: Harley Davis > I'm used to the default idea. It doesn't have to be compatible > with CL's merge-pathnames. Maybe this is OK too, but if so I > think the documentation could do more to suggest a "model" > for understanding what roles the arguments play. The defaulting > idea is a model I find helpful. > > I think it's complicated to understand for beginners, and really > confusing for people used to C (cf Richard's message.) What is it about C or Unix that makes it confusing? 
It's evidently something I never encountered, and Richard's message didn't say what it was either.

Usually, when programming with files in C, you just concatenate strings. The model for merge-pathnames is string concatenation. This is what makes it simpler. You should also know that filenames aren't seen as having fields, as opposed to pathnames. Functions like basename, extension, dirname just process the filename as a string. Filenames are, however, better than strings because 1. you can't create one with a bad syntax, 2. they always use Unix notation regardless of the real OS (like Windows, which uses \ instead of /.), 3. they allow better typed functions.

BTW, when I first encountered merging (not in Lisp or Unix, though I can't remember for sure where it was), merging with defaults seemed very natural to me. I still don't have a simple model for whatever it is you're doing.

String concatenation, the simplest model possible.

> Humm. I'm not sure what all the implications of this are. > If I can use printf in both Lisp and C, I'd expect the output to > go to the same place and be interleaved in the obvious way. > Indeed, it's a pain when using C with some Lisps that output > to the same destination is independent. > > EuLisp printf needs to flush immediately for good interleaving. As > far as prin goes, the flushing is explicit (or implicit with print). > This is no worse than C++ streams, and people seem to live with that.

My point is this: if we have printf, I want these other things to be true. If they aren't then printf starts being accompanied by pitfalls. I don't want a big pitfall list for EuLisp. (My Common Lisp one is only a few pages, but I've been forgetting to put things in.)

I don't think there's a problem for printf in particular because the proposed Lisp version flushes immediately and the C version also does. There is a problem for prin vs.
ftell() and company, but I believe it's less serious in practice simply because you use ftell() much less often than printf().

-- Harley

* The 80s strike again :From: Harley Davis

Date: Tue, 7 Dec 93 20:08:07 GMT

:From: Jeff Dalton

> It doesn't make life easier for me. There's a class of programmers, > which doesn't include programmers like me, that's going to get this > easier life. > > I think I explicitly mentioned that this would help a majority of > programmers.

Is this some kind of utilitarianism, or what? (Utilitarianism has the known flaw of allowing nasty things to happen to a few because of benefits to the many.)

Yes, it is utilitarianism, but the minority in this case does not exactly suffer eternal damnation. At the worst, a few programmers who have never used C will have to learn a new set of formatting directives. Call it enlightened utilitarianism.

> Personally, given the choice between helping the > minority of Lisp programmers vs. the vast majority of C/C++ > programmers, I prefer the latter.

I would rather benefit programmers by providing a better language than by giving them what they already think they want. Moreover, I regard this as a better long-term strategy, and I suspect you may even agree with it. However, I don't see the point of trying to benefit via programming language design a group of people who have very different ideas than I do about what counts as a good language, especially when there are plenty of other people providing the kinds of languages those people prefer.

If this means I have to appeal to a minority of C and C++ programmers, so be it. Like I have said several times, this proposal could be replaced if anyone had significantly better ideas. However, in the absence of great new ideas, it seems wise to use a popular standard as a basis for discussion.

> I would have to say, too bad, Jeff, > but I don't really think you would suffer very much, if at all.

This is the voice of the 80s.
Other factors count for nothing, and an essentially economic decision prevails.

No, if you suffered a lot, it would matter. But you don't. In fact, if I may be so daring, you wouldn't suffer at all.

Another is that some useful applications start in Lisp but then arrive in C++. (Ilog has done this.) People get the impression that Lisp was inadequate rather than thinking Lisp made it easier to develop the application.

Strangely enough, many of our new C++ clients have started spontaneously asking if Ilog didn't have some sort of improved prototyping/rapid development environment which works with C++...

Another factor, though less common, is that Lisp doesn't provide good enough support for programming in the large. About a year ago (Nov 92) we had an exchange about modules, libraries, etc. I think Harley and I agreed that something larger than the current modules is required. I also wanted to remove the parens that end up surrounding all modules. I don't remember if anything ever came of this, but I would like to revive the discussion.

Not only do we agree with the idea of having larger units, but we have been doing something about it for a couple years now. For the parens, we never had the problem.

-- Harley

* Harley write-ups: defliteral :From: Harley Davis

Date: Tue, 7 Dec 93 19:49:08 GMT

:From: Jeff Dalton

> > What you say is entirely true, and we could adopt that approach. > > Let's then.

But is it clear what "that approach" is? (See below.)

> > We shouldn't do something just because we can. For example, the fact > that your idea is rather complex to explain is a consideration.

I don't think my idea is even slightly complex to explain, especially compared to Talk's defglobal trick. I still don't understand why that works until I think for a minute or so.

It is for the functions called during literal evaluation. For instance,

(defun foo (x) ...)
(defliteral %l% (foo 5))

vs. ... import foo from m1 for execution ...
(defliteral %l% (foo 5))

vs. ...
import foo from m1 for compilation ...
(defliteral %l% (foo 5))

vs. ... foo is defined in a std. lib ...
(defliteral %l% (foo 5))

Which works? What's the rule? Ugly cases can be generated at will.

> But the workaround (using a defglobal instead of or paired with a > defliteral) is tricky too. > > But it follows an extremely clear model of module processing/execution.

If anything, it discredits that model.

You should try using it before shitting all over it.

> In other words, in Talk, in no sense is any > form ever evaluated during module preprocessing. In your proposal, > some forms are evaluated. We considered the simple model to be very > important.

I think my model is simpler overall. Moreover, I think this rule about modules is distorting more and more of the language. That we can't even have a simple way of defining constants is going too far.

It's not simpler because you have to add extraneous rules to disambiguate cases like the one above. I'll be interested in seeing the precise statement of your rule so I can find more tough cases to throw at it. We've gone down this route here and the result for us has always been far too messy and complex, which is why we backed out to the current state. If you can somehow combine execution and compilation dependencies in a marvelous way that is both simple and allows the creation of minimal applications, I'll be impressed. But I'm not holding my breath.

> (+ 1 1) is an expression containing only constants in the sense that > I had in mind. So is (/ %pi% 2) when %pi% is defined by defliteral. > > Is '+' a literal? If so, what about user-defined functions? This > problem is definitely harder in EuLisp than in, say, C, because even > the core functions are supposed to be redefinable and treated as > normal bindings.

Is this so? When did it happen?

It's been this way for years. It's even worse: syntax is also redefinable. I'm surprised you're suggesting this solution without understanding EuLisp's module system.
People used to be concerned about being able to analyze code. Are you now telling me that when the compiler looks at a module in which + appears it can't tell what + means?

It can sort of tell since we specify standard module names. But this means that your rule is going to be even more hairy: The literal value can use data literals, other symbolic literals, and functions defined in the standard modules x, y, and z. (Do you also allow functions from level-1 libraries? Won't the user be frustrated if his own modules can't be used? etc.)

If so, then either we've broken the ability to analyze code or the analysis is incredibly wimpy.

I completely agree with you. I have always complained about this aspect of the EuLisp module system. However, some people (Dave for instance) want to be able to rename any function so they can, for example, have complete Scheme compatibility.

> The defglobal solution suggested in the proposal gets around requiring > more than 1 level. That's its purpose.

I know that's its purpose. But far from making this approach look good, it makes it look like it must be based on a mistake.

But it's not.

-- Harley

* Harley write-ups: default handlers :From: Harley Davis

Date: Tue, 7 Dec 93 19:32:37 GMT

:From: Jeff Dalton

> The / conditions are just inherently > continuable. (Or, if you really insist, they are always signaled > continuably since the system signals them that way.)

The names suggest requests rather than conditions. sounds more like a condition, and I don't see why it has to be continuable.

It has to be continuable because you might be in the middle of a (Lisp) READ, and not continuing could mess up some state.

The line counting and similar examples came up before. If they require inherently continuable conditions, I'd rather lose the examples.

Fine, modify the proposals to take out this part. I think this is an issue we considered several times before and we stayed with the approach we have now.
I don't think we should overturn all that as a side effect of adopting a stream proposal. I also don't want to have to spend a lot of time defending those decisions. Is it an essential part of the stream proposal?

I think the proposal could work without it, but I don't have the time to rewrite it. (Indeed, I don't really have the time to continue this exponentially growing discussion, but I will anyway because I think there is a chance of converging.)

> That is, I think we should make signalling independent of any > provision for recovery. Default handlers encourage a different > way of thinking in which the signaller has to know what the > default handler does and be prepared to deal with it. > > What's wrong with this way of thinking for these conditions?

Why should I have to provide a way to continue? If you want to define a protocol in which a way to continue is normally provided, that may be ok. But if it becomes a property of the type, then pretty soon every case where we can think of a useful way to continue will start requiring this. One bad consequence of this, indicated in my earlier message, is that programmers will signal a less appropriate condition just to avoid dealing with continuing.

I don't see how this argument applies here. The programmer doesn't signal these conditions, he just handles them.

> Default handlers also, of course, have the usual problems of such global > arrangements, that when different parts of a system have different > requirements it's difficult for them to fit together. > > If there is one default handler which is a unique object I don't see > how this problem arises.

What do you mean? Different parts may want different defaults.

I don't understand this response. If one part defines a condition class, then it gets to decide the default for that class. Nobody else does. This is an extension of the rule that you can't define a method for a specified gf specializing only on defined classes.
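For concreteness, the model Harley is defending might be sketched like this (entirely hypothetical syntax: the <buffer-empty> class, slot layout, and the default-handler generic are all made up here for illustration, not taken from any proposal):

```lisp
;; Sketch of one-default-handler-per-condition-class.  The module
;; that defines the condition class also fixes its fallback
;; behaviour; clients may install dynamic handlers around calls,
;; but only the defining module chooses the default.
(defcondition <buffer-empty> ()
  stream ())

(defmethod default-handler ((c <buffer-empty>) continuation)
  ;; assumed protocol: refill the stream's buffer and continue
  (fill-buffer (condition-stream c))
  (continuation))
```

Jeff's objection, in these terms, is that a client module which wants a different fallback for <buffer-empty> has nowhere to put it.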
> The Le-Lisp experience showed that both dimensions of extensions are > desirable. We could, of course, just say "tough" to the second type > of extension and get rid of the conditions. I think this would be > sad, especially since it's quite difficult to subclass file streams.

I would think that one of our aims would be to provide file streams that were reasonably easy to subclass.

The problem comes up with the open function. Also, it may not be too useful to subclass them. (In the proposal, modified to replace FILE* with fd, almost everything streamable from POSIX becomes a file stream, including pipes, fifos, sockets, displays, devices, etc.)

> What exactly is the argument for the idea that no condition > class is ever inherently continuable?

I haven't given one. My argument is more pragmatic and aesthetic than essentialist. However, if I signal a condition, I'm saying that something has occurred. Why should this ever require that I also provide a way to continue? It looks like a separate decision to me.

Sometimes yes, sometimes no. Why take a firm stance? With errors it is clearly the signaler who decides, but why should this always be the case?

> (In Talk, continuability is > always a question of the protocol between the signaler and the handler > for a given condition class. It is not an explicit notion.)

This is better, and I might not mind it if it were sufficiently well confined. But I don't want us to start requiring this in all kinds of cases, and I think it has various problems.

I would be curious to know what problems you have in mind.

-- Harley

* The 80s strike again :From: Jeff Dalton

> Yes, it is utilitarianism, but the minority in this case does not > exactly suffer eternal damnation.

It's still suspect reasoning.

> Like I have said several times, this proposal could be replaced if > anyone had significantly better ideas.

What's wrong with what's in there now (or in the past, if it's been deleted)?
Anyway, I'd prefer something simple now, leaving some things for additional libraries.

> However, in the absence of great new ideas, it seems wise to use a > popular standard as a basis for discussion.

Ok.

> This is the voice of the 80s. Other factors count for nothing, > and an essentially economic decision prevails. > > No, if you suffered a lot, it would matter.

What you mean (it seems) is "yes, but so what?"

> Strangely enough, many of our new C++ clients have started > spontaneously asking if Ilog didn't have some sort of improved > prototyping/rapid development environment which works with C++...

You must be doing something right, then. Good work.

> Another factor, though less common, is that Lisp doesn't provide > good enough support for programming in the large. About a year > ago (Nov 92) we had an exchange about modules, libraries, etc. > I think Harley and I agreed that something larger than the current > modules is required. I also wanted to remove the parens that > end up surrounding all modules. I don't remember if anything > ever came of this, but I would like to revive the discussion. > > Not only do we agree with the idea of having larger units, but we have > been doing something about it for a couple years now. For the parens, > we never had the problem.

Well, I think EuLisp modules have serious problems. To indicate how serious I think they are, I said (last year) that with the then current module system I would not often use EuLisp by choice. A number of good ideas were suggested at that time, but the usual Keith - Harley - Jeff disagreements prevented a consensus from emerging. I'd like to do something about this, but I don't want to adopt exactly what's in ITalk. I think it would be better to use EuLisp for a distinct design.

-- jd

* Harley write-ups: default handlers :From: Jeff Dalton

> > The / conditions are just inherently > > continuable.
(Or, if you really insist, they are always signaled > > continuably since the system signals them that way.) > > The names suggest requests rather than conditions. > sounds more like a condition, and I don't see why it has to be > continuable. > > It has to be continuable because you might be in the middle of a > (Lisp) READ, and not continuing could mess up some state. READ has to be able to not continue after errors, so I don't think this has to be a problem. In any case, I think it's absolutely wrong to overturn carefully considered decisions as a side effect of a stream proposal. It has to be considered separately, and I don't think there's time now to do it properly. > I don't see how this argument applies here. The programmer doesn't > signal these conditions, he just handles them. Can't the programmer define new stream classes or new kinds of buffering? Anyway, the argument is about the general issues of introducing default handlers and making some conditions inherently continuable. > What do you mean? Different parts may want different defaults. > > I don't understand this response. If one part defines a condition class, > then it gets to decide the default for that class. Nobody else does. > This is an extension of the rule that you can't define a method for a > specified gf specializing only on defined classes. This rule only says (at best) who wins. If there's a condition class, different parts of the program may want to handle it in different ways by default. The classes involved may be user-defined as well. (Remember that I'm talking about introducing default handlers, not about having a special case for two conditions with no facility for anything similar for other condition types.) > I would think that one of our aims would be to provide file streams > that were reasonably easy to subclass. > > The problem comes up with the open function. Also, it may not be too > useful to subclass them. 
(In the proposal, modified to replace FILE* > with fd, almost everything streamable from POSIX becomes a file > stream, including pipes, fifos, sockets, displays, devices, etc.) Then FILE sounds like a misleading name. BTW, if you use fds instead of FILE*s, I still think fopen makes sense as a name, because you're getting a stream, which is the EuLisp analogue for a FILE *, rather than an fd. > > What exactly is the argument for the idea that no condition > > class is ever inherently continuable? > > I haven't given one. My argument is more pragmatic and aesthetic > than essentialist. However, if I signal a condition, I'm saying > that something has occurred. Why should this ever require that > I also provide a way to continue? It looks like a separate decision > to me. > > Sometimes yes, sometimes no. Why take a firm stance? With errors it > is clearly the signaler who decides, but why should this always be the > case? I'm not trying to rule it out forever, if sufficiently good reasons to do something else come along. But once we start requiring continue handlers (for lack of a better name) it will be difficult to go back. > > (In Talk, continuability is > > always a question of the protocol between the signaler and the handler > > for a given condition class. It is not an explicit notion.) > > This is better, and I might not mind it if it were sufficiently well > confined. But I don't want us to start requiring this in all kinds of > cases, and I think it has various problems. > > I would be curious to know what problems you have in mind. I tried to explain in my message. It's kind of frustrating to get a reply like this, because it looks like everything I said had no effect. -- jd * Harley write-ups: etc :From: Jeff Dalton > What is it about C or Unix that makes it confusing? It's evidently > something I never encountered, and Richard's message didn't say what > it was either. > > Usually, when programming with files in C, you just concatenate > strings. 
The model for merge-pathnames is string concatenation. This > is what makes it simpler. > > You should also know that filenames aren't seen as having fields, as > opposed to pathnames.

Doesn't matter for the point I'm making here.

> [...] > String concatenation, the simplest model possible.

Well, when I look at the proposal, string concatenation is not what springs to mind. In one case, the directory part of the first arg is used as a default, in another it's concatenated onto the directory of the 2nd arg (but not to the front of the entire 2nd arg); for devices, there's an exclusion rule instead of any concatenation; and it's not always clear what happens to extensions.

-- jd

* Harley write-ups: defliteral :From: Jeff Dalton

> I don't think my idea is even slightly complex to explain, especially > compared to Talk's defglobal trick. I still don't understand why > that works until I think for a minute or so. > > It is for the functions called during literal evaluation. For instance, > > (defun foo (x) ...) > > (defliteral %l% (foo 5))

But this isn't "why that [the defglobal trick] works". (I know why it works, but repeatedly find it counterintuitive.)

> > But it follows an extremely clear model of module processing/execution. > > If anything, it discredits that model. > > You should try using it before shitting all over it.

But I have tried it. I've used Ilog Talk modules and (via FEEL) EuLisp ones. I also implemented EuLisp modules once upon a time.

> I think my model is simpler overall. Moreover, I think this rule > about modules is distorting more and more of the language. That > we can't even have a simple way of defining constants is going > too far. > > It's not simpler because you have to add extraneous rules to > disambiguate cases like the one above. I'll be interested in seeing > the precise statement of your rule so I can find more tough cases to > throw at it.
So far I haven't seen any tough case for my rule, only something that indicates EuLisp modules may have made analysis effectively impossible short of looking at the whole program.

> We've gone down this route here and the result for us has always been > far too messy and complex, which is why we backed out to the current > state. If you can somehow combine execution and compilation > dependencies in a marvelous way that is both simple and allows the > creation of minimal applications, I'll be impressed. But I'm not > holding my breath.

Every language I have ever heard of that allows literal constants to be defined allows the definitions to refer to other literals defined in the same module (or equiv structure).

> > (+ 1 1) is an expression containing only constants in the sense that > > I had in mind. So is (/ %pi% 2) when %pi% is defined by defliteral. > > > > Is '+' a literal? If so, what about user-defined functions? This > > problem is definitely harder in EuLisp than in, say, C, because even > > the core functions are supposed to be redefinable and treated as > > normal bindings. > > Is this so? When did it happen? > > It's been this way for years. It's even worse: syntax is also > redefinable. > > I'm surprised you're suggesting this solution without understanding > EuLisp's module system.

What about the module system do I not understand? I think I understand all too well that it has a number of losing features. I just thought we hadn't gone so far as allowing assignment to core names. Renaming is a different matter, about which more below.

Assignment across module boundaries is a bad idea in general, in my view, because the compiler has to look at all client modules to tell whether an exported name is subject to assignment. This makes it kind of difficult to compile at all, as compilation is usually understood.

> People used to be concerned about being able to analyze code.
Are > you now telling me that when the compiler looks at a module in > which + appears it can't tell what + means? > > It can sort of tell since we specify standard module names. But this > means that your rule is going to be even more hairy: The literal value > can use data literals, other symbolic literals, and functions defined > in the standard modules x, y, and z. (Do you also allow functions > from level-1 libraries? Won't the user be frustrated if his own > modules can't be used? etc.)

Users will be pretty frustrated if they can't do things like (defliteral %pi-over-two% (/ %pi% 2)) without defining two levels of module or using the defglobal trick. That we need the concept "can be evaluated at compile time" doesn't sound that bad to me. I understand it, anyway, and simple cases will be obvious to all.

> If so, then either we've broken the ability to analyze code or the > analysis is incredibly wimpy. > > I completely agree with you. I have always complained about this > aspect of the EuLisp module system. However, some people (Dave for > instance) want to be able to rename any function so they can, for > example, have complete Scheme compatibility.

Renaming doesn't bother me too much, although it makes it difficult for someone to tell what's going on by local inspection. The compiler can figure out what's called what (and what, say, + means) by looking at the module definition and at what "server" modules (w.r.t. the module being compiled) have exported. But if some random module that uses a core module can just say (setq car foo), we're in trouble.

> > The defglobal solution suggested in the proposal gets around requiring > > more than 1 level. That's its purpose. > > I know that's its purpose. But far from making this approach look > good, it makes it look like it must be based on a mistake. > > But it's not.

Is so.

-- jd

* Defliteral vs defconstant :From: Jeff Dalton

> > Is '+' a literal? If so, what about user-defined functions?
This > > problem is definitely harder in EuLisp than in, say, C, because even > > the core functions are supposed to be redefinable and treated as > > normal bindings. > Is this so? When did it happen? > It's been this way for years. It's even worse: syntax is also > redefinable. > I'm surprised you're suggesting this solution without understanding > EuLisp's module system.

I just got a copy of the 0.99 definition onto the machine at home to look up exactly what the rules are. I'd also thought about defliteral, noted that there was no point in defining one that wasn't exported, and thought about a more general mechanism based on constant/mutable bindings. It turned out that this is more or less what EuLisp has now, which is pretty much what I thought it had. Ordinary function definitions can't be changed by assignment in random modules; defconstant can be used to define a name that can't be assigned to. Compiler optimizations can do the rest; so I don't think defliteral is needed.

BTW, I've long since accepted that macros aren't going to be usable in the module in which they're defined in EuLisp; so that more general issue (general because literals are analogous to zero-parameter macros) is not in dispute.

-- jd

* Defliteral vs defconstant :From: Harley Davis

Date: Thu, 9 Dec 93 03:21:24 GMT

:From: Jeff Dalton

I just got a copy of the 0.99 definition onto the machine at home to look up exactly what the rules are. I'd also thought about defliteral, noted that there was no point in defining one that wasn't exported, and thought about a more general mechanism based on constant/mutable bindings. It turned out that this is more or less what EuLisp has now, which is pretty much what I thought it had. Ordinary function definitions can't be changed by assignment in random modules; defconstant can be used to define a name that can't be assigned to. Compiler optimizations can do the rest; so I don't think defliteral is needed.
I am opposed to any language feature which requires a smart compiler. Either we require the processing to be done at compile-time, which means either an interpreter or smart compiler in the compilation environment, or the literal calculation is optional. If it requires processing of a form in the compilation environment, it would be the only such case and make implementations harder. If it's optional, it loses the guarantee of Talk's defliteral, which is that the literal is really substituted where it is referenced. The simplicity and attractiveness (to us, anyway) of defliteral arises primarily from the fact that it's a no-brainer and fits in easily with the basic module processing view I've outlined before. (ie, nothing evaluated in a module being compiled.)

-- Harley

* Streams :From: Richard Tobin

Let's see if we can find some consensus on the streams issue. Starting at the top and working downwards:

printf vs format

I don't think this is much of an issue. I believe Jeff will go along with printf if the rest of the system is OK.

read, print etc

I think these are uncontroversial. If we want to have streams that are, say, queues of lisp objects, this is a level at which the user must be able to add extensions, so these should be generic functions that discriminate on the stream (or they should be wrappers for such functions).

getchar, putchar etc

We should have these. The standard methods for read and print should call them, or behave as if they do (see below). Previous proposals have had these be generic functions so that the user can implement new types of character stream; this is inefficient and it seems to me to be the main advantage of Harley's proposal that it overcomes this. If we adopt Harley's approach, getchar is just a simple function that extracts the next character out of a buffer. It can be compiled inline (perhaps) or read can just get the character itself.
The extensibility is provided by allowing the user to provide the functions to fill/empty the buffer. How the buffer filling/emptying is done is the controversial point. In Ilog Talk, this is done by signalling a condition. Jeff has objected to this because it either requires default handlers (which we have rejected), or some special purpose mechanism. An alternative is just to have generic functions that fill and flush the buffers. This doesn't allow the user to add functionality to an existing stream instance (eg to count characters) but it does allow him to create a new class of stream that does this. And it would be possible to wrap such a stream around an existing instance of a stream (the fill-buffer method would just repeatedly call getchar on the existing stream). I propose that we adopt the generic function approach.

It occurs to me that we can specify a bit more and make this even more like C/Unix. Instead of providing fill/flush buffer functions, we can have something corresponding to Unix's file descriptors. These would support reading and writing a block of characters. I can't immediately think of a good name but let's call them ustreams for now (note that they're not a subclass of stream - they don't support any of the normal stream operations). The operations on ustreams would be:

  (ustream-open path mode) -> ustream
  (ustream-close ustream)
  (ustream-read ustream) -> string
  (ustream-write ustream string)
  (ustream-seek ustream position) -> position

There would now be three layers to the system:

  read, print                                (Unique to Lisp)
  --------------------------
  fopen, fclose, fprintf, getc, putc, fseek  (Correspond to C's stdio)
  --------------------------
  ustream-*                                  (Correspond to Unix system calls)

Read and print apply to all streams. Their default methods use getc and putc, or appear to. Extending to non-character streams is done by defining new methods for read and print. Fprintf etc apply only to character streams and are not generic.
They fill and flush their buffers by calling the ustream functions. The ustream functions are generic. Extending to new kinds of character stream is done by defining new methods on ustream-*. (Note that in this approach I'm including what we previously called integer streams as a kind of character stream. Maybe we should refer to "buffered-byte-streams" instead. put-integer would work [as if] by decomposing the integer into bytes and calling putc repeatedly.)

-- Richard

* Eulisp 0.99 syntax definitions :From: Jeff Dalton

Where does the new notation for syntax definitions come from? Why did you pick this rather than the more traditional style you used before? It's interesting. I'm not sure whether I like it or not.

-- jeff

* Defliteral vs defconstant :From: Jeff Dalton

> I am opposed to any language feature which requires a smart compiler.

It doesn't require a very smart compiler. Pre-evaluating some expressions at compile time is pretty standard stuff. Moreover no special tricks whatsoever are required for the case where a defconstant is exported, which is the only useful case for defliteral. I think this approach is simpler, easier to understand, and closer to what other languages do. Moreover, it requires no changes to the definition.

BTW _were_ you saying that + and the like could be assigned to?

-- jeff

* Streams :From: Jeff Dalton

> Let's see if we can find some consensus on the streams issue.

I think something somewhere in this area would be reasonable. In any case, I'd like to retain the buffer-based approach that Harley suggested, since it looks like an excellent way to solve the problem of providing efficient but flexible streams. Much better than the generic read-char ideas we were messing with before (though generic read-char can be fast in the standard case, it requires complex or duplicated code and isn't fast when the user starts defining stream classes.)
I'd also like to see a simple foreign function interface and (this is in a different area) a way to define new function classes. Is this already possible in EuLisp? If not, or if it's too hard to use, the approach taken in Ilog Talk looked reasonable to me.

-- jeff

* Defliteral vs defconstant :From: Harley Davis

Date: Mon, 13 Dec 93 10:58:16 GMT
:From: Jeff Dalton

> > I am opposed to any language feature which requires a smart compiler.
>
> It doesn't require a very smart compiler. Pre-evaluating some expressions at compile time is pretty standard stuff.

Which expressions?

> Moreover no special tricks whatsoever are required for the case where a defconstant is exported, which is the only useful case for defliteral.

I don't understand.

> I think this approach is simpler, easier to understand, and closer to what other languages do. Moreover, it requires no changes to the definition.

You still haven't explained exactly what constitutes a constant expression for EuLisp, and whether the compiler (LPU) is required to evaluate them or not. If the compiler is not required to evaluate such constant expressions, then defliteral is orthogonal to defconstant (and I would even suggest that defconstant is not very useful given deflocal.)

> BTW _were_ you saying that + and the like could be assigned to?

No, we decided a long time ago that defining forms made constant bindings. But what constitutes exactly the "and the like" for your proposal?

-- Harley

* Streams :From: Julian Padget

I'm not sure when Dave is going to send it out, but he and I spent some time working on streams at GMD the week before last. It could satisfy the "something somewhere in this area" criterion! It extends Harley's model to support streams of objects where the source or sink is not a file, but it retains the non-generic buffered approach for character input/output. Dave was last seen reworking the streams section of 0.99 to describe this scheme and he sent me mail on Friday to say it was nearly done.

--Julian.
* Streams :From: Jeff Dalton

> I'm not sure when Dave is going to send it out, but he and I spent some time working on streams at GMD the week before last. It could satisfy the "something somewhere in this area" criterion! It extends Harley's model to support streams of objects where the source or sink is not a file, but it retains the non-generic buffered approach for character input/output.

Does anyone have a view on Richard's suggestion that there be an analogue of the FILE * / fd distinction (ustreams)? Ustreams were the basic sources and sinks.

-- jeff

* Defliteral vs defconstant :From: Jeff Dalton

> Date: Mon, 13 Dec 93 10:58:16 GMT
> :From: Jeff Dalton
>
> > > I am opposed to any language feature which requires a smart compiler.
> >
> > It doesn't require a very smart compiler. Pre-evaluating some expressions at compile time is pretty standard stuff.
>
> Which expressions?

Typically, arithmetic. Only certain kinds of values make sense, because you're anticipating the value it will have when the program gets going.

> > Moreover no special tricks whatsoever are required for the case where a defconstant is exported, which is the only useful case for defliteral.
>
> I don't understand.

I assumed some context from my earlier message. A defliteral in M has to be exported and used in other modules M'. That same case for defconstant requires no compile-time evals or tricky optimizations. That is, if I do a defconstant in M and import it from M to M', how is this different from having done a defliteral in M? I don't see why it has to be interestingly different. The constant has a value that cannot change, so it can be in-lined etc.

> > I think this approach is simpler, easier to understand, and closer to what other languages do. Moreover, it requires no changes to the definition.
>
> You still haven't explained exactly what constitutes a constant expression for EuLisp, and whether the compiler (LPU) is required to evaluate them or not.
I'm happy to leave that up to implementations, just as I'm happy to leave in-lining generally up to implementations. However, a constant expression is basically an expression whose value can't change.

> If the compiler is not required to evaluate such constant expressions, then defliteral is orthogonal to defconstant (and I would even suggest that defconstant is not very useful given deflocal.)

In my view, defconstant subsumes all useful instances of defliteral. It also has other uses. Why should I be able to get a constant binding only via defun and friends?

> > BTW _were_ you saying that + and the like could be assigned to?
>
> No, we decided a long time ago that defining forms made constant bindings. But what constitutes exactly the "and the like" for your proposal?

I was wondering what _you_ were saying, so maybe you'll have to fill that in, if you want. However, I meant at least the names defined by EuLisp (as renamed by module madness, etc). For instance, if someone imports the eulisp level 0 module w/o renaming, can the compiler tell that = and car have their usual meanings?

-- jd

* Streams :From: Dave De Roure

> I'm not sure when Dave is going to send it out, but he and I spent some time working on streams at GMD the week before last. It could satisfy the "something somewhere in this area" criterion! It extends Harley's model to support streams of objects where the source or sink is not a file, but it retains the non-generic buffered approach for character input/output. Dave was last seen reworking the streams section of 0.99 to describe this scheme and he sent me mail on Friday to say it was nearly done.

I'm modifying it in the light of (some of) the discussion - I've also tried to produce code to demonstrate that we can do (some of) the things we want with it. I'll post the amended definition text for comment tomorrow (v. busy today).

-- Dave

* FFI proposal :From: Richard Tobin

A few comments on the FFI.
I don't think it's reasonable to include anything in the language that requires a conservative GC.

The conversion of streams to file descriptors only makes sense for POSIX systems. We should say that it produces an integer under POSIX, but may produce something else in other systems.

I assume the number on the end of DEFINTERN is the number of arguments, though this is not explicitly stated. If so, why not make it an argument of the macro instead? Remember you can do

  #define DEFINTERN(cname, lname, args) DEFINTERN##args(cname, lname)

I don't think the lack of computed access to functions by name is a serious problem. Many implementations will have such access internally (at least for exported functions). Others (eg those in which all module linking is done statically) may have to provide some kind of linker. If you want to resolve it in the language, there could be a module syntax for exporting functions to foreign languages or a dynamic mechanism for putting lisp functions into a table that could be accessed from C.

Should the name of the lisp function be a C string rather than just the name? If it's just the name, the macro can stringify it if necessary, or munge it into some magic name to be recognised by the linker. It can't destringify it however, so having it be a string rules out the second alternative. (Of course, if it can contain characters invalid in C identifiers it can't do that anyway.)

-- Richard

* FFI proposal :From: Harley Davis

In article Richard Tobin writes:

> A few comments on the FFI.
>
> I don't think it's reasonable to include anything in the language that requires a conservative GC.

OK, as I stated in the introduction, the only difference is eliminating the things which reference ptr and require
for returning non-Lisp pointers. Personally, I don't care one way or another on this issue, but it looks currently like conservative GC's are carrying the day.

> The conversion of streams to file descriptors only makes sense for POSIX systems. We should say that it produces an integer under POSIX, but may produce something else in other systems.

Where "something else" is of course useful as a stream...

> I assume the number on the end of DEFINTERN is the number of arguments, though this is not explicitly stated. If so, why not make it an argument of the macro instead? Remember you can do
>
>   #define DEFINTERN(cname, lname, args) DEFINTERN##args(cname, lname)

Argument accepted.

> I don't think the lack of computed access to functions by name is a serious problem. Many implementations will have such access internally (at least for exported functions). Others (eg those in which all module linking is done statically) may have to provide some kind of linker. If you want to resolve it in the language, there could be a module syntax for exporting functions to foreign languages or a dynamic mechanism for putting lisp functions into a table that could be accessed from C.

Perhaps the C syntax should mention a module and a function name to allow more implementation possibilities. The restriction would be that the module named must be the one defining the function in question, rather than one which might import the function (and possibly rename it locally). So: DEFINTERN(cname, lmodule, lname, args).

> Should the name of the lisp function be a C string rather than just the name? If it's just the name, the macro can stringify it if necessary, or munge it into some magic name to be recognised by the linker. It can't destringify it however, so having it be a string rules out the second alternative. (Of course, if it can contain characters invalid in C identifiers it can't do that anyway.)

And it can, so there you are. That's the reason. (Simple example: the character '-'.)
Alternatively, we could have a sort of extern "C" form in Lisp, in which the external identifiers were limited to C syntax. I personally think this would be unwieldy; DEFINTERN is much easier to use and it's in the right place.

-- Harley

* Streams :From: Dave De Roure

> I'm not sure when Dave is going to send it out, but he and I spent some time working on streams at GMD the week before last. It could satisfy the "something somewhere in this area" criterion! It extends Harley's model to support streams of objects where the source or sink is not a file, but it retains the non-generic buffered approach for character input/output. Dave was last seen reworking the streams section of 0.99 to describe this scheme and he sent me mail on Friday to say it was nearly done.

Right. Working this stuff into the definition, and writing code to demo it, has been a useful nightmare, full of those beasties that populate stream-hell... Basically we perceived the same consensus as Richard summarised, ie character or file streams with specific operations, an abstract stream class, and generic stream operations with methods for character streams. I need some help from you all on a few issues that have emerged:

The default handler approach to buffer fill and flush operations is attractive because it gives the programmer two orthogonal techniques: they can put methods on the generic function (essentially a `static' approach at level 0) or they can introduce new handlers (a `dynamic' approach). After discussion at GMD, Julian and I both became convinced of the handler solution. Since we don't have default handlers I had decided, in the spirit of compromise, to specify the generic function solution in the definition, thinking this admits the handler solution as a possible implementation technique. But I'm no longer sure we can postpone this decision - I think the user should know if there are conditions being raised. Incidentally, there is a third (orthogonal?)
approach, which is to specify the fill/flush functions by passing them as arguments when the stream is created.

Q. Should the generic fill/flush functions be called directly or via a default handler?

Ilogtalk doesn't have seek or unget operations. I propose that we support these. In fact, I propose this rationale: we should have a standard stream protocol (inc seek, unget etc) which could be applied to any stream, and if an operation is unsupported for a particular stream type, a condition is raised. This is to do away with the added complexity of seekable streams (since you often don't know whether a stream is seekable or not until you try it).

Q. Should we support seek and unget operations?

What of input-and-output streams? We once had a mechanism for taking two streams and combining them into a single stream object which could respond to both input and output operations.

Q. Should we support combined i-o streams in the definition?

We spent some time at GMD checking that the new streams would fit into the definition wrt threads and collections. To do this neatly, we created the class alongside the (aka ) class, with the intention that these streams would basically be queues of objects (compatible with generic stream operations and with collections). However, we didn't work this through fully. Our file-streams are basically end-points (streams connected to some external data source/sink); are the object streams also end-points (which can be connected together, a la sockets)? For the purposes of integrating collections with streams, I think what we actually need is a class. Like the combined streams above, one of these objects responds to input and output operations.

Q. Shall we introduce a class in the definition?

Finally, I have some sympathy with Richard's comment about URLs.
We could lead the field here by introducing a {file,path}name mechanism which accommodates the extra information (probably just host or address) needed by URLs and many other network-oriented data services (eg in my own work, my filename objects have these slots: host, protocol/drive, dirname, basename, extension). If we have an abstract filename class then these extra slots can be added. I'll not make a specific question of this, but I would be interested in your opinion.

When we have some preliminary agreement on these answers, I'll send out the corresponding definition-speak as a focus for further discussion. Bye for now,

-- Dave

* Streams :From: Harley Davis

Just a couple points, maybe more later... In article Dave De Roure writes:

> The default handler approach to buffer fill and flush operations is attractive because it gives the programmer two orthogonal techniques: they can put methods on the generic function (essentially a `static' approach at level 0) or they can introduce new handlers (a `dynamic' approach). After discussion at GMD, Julian and I both became convinced of the handler solution. Since we don't have default handlers I had decided, in the spirit of compromise, to specify the generic function solution in the definition, thinking this admits the handler solution as a possible implementation technique. But I'm no longer sure we can postpone this decision - I think the user should know if there are conditions being raised. Incidentally, there is a third (orthogonal?) approach, which is to specify the fill/flush functions by passing them as arguments when the stream is created.
>
> Q. Should the generic fill/flush functions be called directly or via a default handler?

We use the conditions for all sorts of interesting hacks. For example, our pretty printer is based on trapping the flush-buffer condition and not calling the gf in certain cases. (The idea is, first you try to print a whole expression on one line.
If that fails -- ie, if flush-buffer is called while printing the expression b/c right margin is passed -- then take the subexpressions and print them one per line, indented appropriately.) This application is dynamic control of all stream types, and ignores the details of the protocol. Many other examples could be cited. So I like the condition approach (of course.) However, I don't think it is really problematic to just specify the gf's and leave the conditions as an implementation approach, if the distrust of default handlers is widespread. Is anybody else against default handlers on principle?

> Ilogtalk doesn't have seek or unget operations. I propose that we support these. In fact, I propose this rationale: we should have a standard stream protocol (inc seek, unget etc) which could be applied to any stream, and if an operation is unsupported for a particular stream type, a condition is raised. This is to do away with the added complexity of seekable streams (since you often don't know whether a stream is seekable or not until you try it).
>
> Q. Should we support seek and unget operations?

We decided not to support them because the interaction with buffering is hard to explain. This is more problematic in this stream system than with FILE*'s because 1. the buffer is an accessible object, and 2. the high level operations (ie read) may not want to be always checking that the file pos hasn't changed.

> What of input-and-output streams? We once had a mechanism for taking two streams and combining them into a single stream object which could respond to both input and output operations.
>
> Q. Should we support combined i-o streams in the definition?

No need; all streams in this proposal support both I/O. For file streams, whether you can do one, the other, or both depends on the mode passed to fopen. For other types of streams, it's left to the stream author to control this.
> We spent some time at GMD checking that the new streams would fit into the definition wrt threads and collections. To do this neatly, we created the class alongside the (aka ) class, with the intention that these streams would basically be queues of objects (compatible with generic stream operations and with collections). However, we didn't work this through fully. Our file-streams are basically end-points (streams connected to some external data source/sink); are the object streams also end-points (which can be connected together, a la sockets)? For the purposes of integrating collections with streams, I think what we actually need is a class. Like the combined streams above, one of these objects responds to input and output operations.
>
> Q. Shall we introduce a class in the definition?

I don't think this is necessary. Object streams can just be created by default to do both I/O.

> Finally, I have some sympathy with Richard's comment about URLs. We could lead the field here by introducing a {file,path}name mechanism which accommodates the extra information (probably just host or address) needed by URLs and many other network-oriented data services (eg in my own work, my filename objects have these slots: host, protocol/drive, dirname, basename, extension). If we have an abstract filename class then these extra slots can be added. I'll not make a specific question of this, but I would be interested in your opinion.

This is not exactly leading the field; it's getting back to the complicated pathname mechanism. What we like about filenames vs. pathnames is that they encapsulate simple strings and use simple string processing operations to extract and combine information; these operations are based on pre-existing Unix commands. They don't have slots, and are not mutable. They also have a fixed syntax across OS's. This eliminates most of the major hassle of using CL-style pathnames. If all of these fields are needed, perhaps filenames are the wrong abstraction for EuLisp.
-- Harley

* Streams :From: Julian Padget

Date: Thu, 16 Dec 1993 11:56:10 +0100
:From: Dave De Roure

[...]

> I need some help from you all on a few issues that have emerged:
>
> The default handler approach to buffer fill and flush operations is attractive because it gives the programmer two orthogonal techniques: they can put methods on the generic function (essentially a `static' approach at level 0) or they can introduce new handlers (a `dynamic' approach). After discussion at GMD, Julian and I both became convinced of the handler solution. Since we don't have default handlers I had decided, in the spirit of compromise, to specify the generic function solution in the definition, thinking this admits the handler solution as a possible implementation technique. But I'm no longer sure we can postpone this decision - I think the user should know if there are conditions being raised. Incidentally, there is a third (orthogonal?) approach, which is to specify the fill/flush functions by passing them as arguments when the stream is created.
>
> Q. Should the generic fill/flush functions be called directly or via a default handler?

I'm in favour of the combination of handler and generic function because of the flexibility it provides. The issue that remains is how I justify changing my opinion to accept default handlers. I remember our talking for a long time about default handlers a few years ago. I don't want to get into a debate about whether I mis-remembered, but I think we finally finessed the issue on the grounds that we had no errors for which there was any useful default treatment other than displaying the error and perhaps entering a debugger or break loop, and that was felt to be part of the "environment" and not something to be specified. Since then our ideas on threads have crystallized significantly and I think this has some bearing on the matter. The proposal talks of a single global handler, but I suspect that is going to be inconvenient in a parallel world.
We have also edged towards the recognition of a per-thread handler in the treatment of unhandled conditions on threads (the aborted state). So it seems to me that we have already almost accepted default handlers---we just do not provide access to them and we still need not, although we can define additional default behaviour for them.

Assuming we did agree to the handler + gf approach there are a number of nasty beasties lurking: does each thread use the fill and flush gfs directly or does each get a copy of the gf? If the latter, how do we add methods to the thread-specific gfs? If we do allow access to a thread's default handler, should it be a no-arg function which returns the handler for the current thread or should it be possible easily to access (and therefore mess with) another thread's handler? (Of course, a thread could always pass its handler on anyway; it's just a question of whether it can be taken or must be given.)

My preferred solution is for there to be a single gf for each of fill and flush which each thread uses (therefore any new methods potentially affect all threads) and that we state the existence of a default-handler (a generic function) for each thread which, by default, has a method for fill-buffer and flush-buffer which call fill and flush respectively. I am undecided about whether to make this object accessible, but am tending towards that view, in which case I prefer there to be a function called default-handler, of no arguments, which returns a gf.

> Ilogtalk doesn't have seek or unget operations. I propose that we support these. In fact, I propose this rationale: we should have a standard stream protocol (inc seek, unget etc) which could be applied to any stream, and if an operation is unsupported for a particular stream type, a condition is raised. This is to do away with the added complexity of seekable streams (since you often don't know whether a stream is seekable or not until you try it).
>
> Q.
> Should we support seek and unget operations?

I sympathize with the second point Harley makes. Is the first one suggesting that the program might mutate the buffer object thus making unget and seek difficult to implement? Sounds a more heinous crime than having unsynchronized high (read) and low (getc) level operations on the same stream.

From this point on I got confused (io-streams, fifo-streams) since I thought we had gone through all this in sufficient detail to be pretty sure that it all worked. In particular, I thought we had concluded that object streams could do everything we wanted (having gone through the recognition of end-points etc.). What happened to change your mind?

> Finally, I have some sympathy with Richard's comment about URLs. We could lead the field here by introducing a {file,path}name mechanism which accommodates the extra information (probably just host or address) needed by URLs and many other network-oriented data services (eg in my own work, my filename objects have these slots: host, protocol/drive, dirname, basename, extension). If we have an abstract filename class then these extra slots can be added. I'll not make a specific question of this, but I would be interested in your opinion.

It sounds cute, but I'd like to leave it out of the definition for now---we can provide it as an add on in an implementation by subclassing---so perhaps we do need an abstract class for filename.

--Julian.

* Streams :From: Harley Davis

In article Julian Padget writes:

> > Q. Should the generic fill/flush functions be called directly or via a default handler?
>
> My preferred solution is for there to be a single gf for each of fill and flush which each thread uses (therefore any new methods potentially affect all threads) and that we state the existence of a default-handler (a generic function) for each thread which, by default, has a method for fill-buffer and flush-buffer which call fill and flush respectively.
> I am undecided about whether to make this object accessible, but am tending towards that view, in which case I prefer there to be a function called default-handler, of no arguments, which returns a gf.

How about thread-default-handler (w/setter) whose default value is a global single handler gf named default-handler?

> > Ilogtalk doesn't have seek or unget operations. I propose that we support these. In fact, I propose this rationale: we should have a standard stream protocol (inc seek, unget etc) which could be applied to any stream, and if an operation is unsupported for a particular stream type, a condition is raised. This is to do away with the added complexity of seekable streams (since you often don't know whether a stream is seekable or not until you try it).
> >
> > Q. Should we support seek and unget operations?
>
> I sympathize with the second point Harley makes. Is the first one suggesting that the program might mutate the buffer object thus making unget and seek difficult to implement? Sounds a more heinous crime than having unsynchronized high (read) and low (getc) level operations on the same stream.

As long as the buffer is a simple string, I don't see how to prevent modifying it in a handler for fill or flush, and furthermore such modification could be useful and not necessarily heinous. For example, a handler method on flush which writes a line number into the output buffer after flushing the previous buffer:

  (defmethod line-count-handler ((c ) ...)
    (call-next-handler) ; how is this done in EuLisp?
    (let ((stream (io-condition-stream c)))
      (prin (incf (stream-output-line stream)) stream)))

Another point is that ungetc is largely unnecessary because of peek-next-char, which covers most of the needs and is usually what you want anyway. There are only a few odd hacks which use ungetc differently, and these mostly involve modifying read tables for new parsers, which is not a supported operation in EuLisp.
Finally, if the FFI is accepted, any random unsupported operation can be done in C code if it is really important to a particular application. -- Harley * Streams :From: Dave De Roure > > >From this point on I got confused (io-streams, fifo-streams) since I > thought we had gone through all this in sufficient detail to be pretty > sure that it all worked. In particular, I thought we had concluded > that object streams could do everything we wanted (having gone through > the recognition of end-points etc.). What happened to change your > mind? Well, I tried coding it, and this raised some questions. In the light of the recent discussion, I'll do it again - I think I may have managed to confuse myself by adapting what we had before rather than starting some of it afresh. I wish I had more time to spend on it! Anyway, assuming they're viable, the question about whether we put these streams in the definition, or just satisfy ourselves that they can be done, still stands. > If we have an abstract filename class then these extra slots > can be added. I'll not make a specific question of this, but > I would be interested in your opinion. > > It sounds cute, but I'd like to leave it out of the definition for > now---we can provide it as an add on in an implementation by > subclassing---so perhaps we do need an abstract class for filename. Exactly my point. -- Dave * Streams :From: Richard Tobin > However, I don't think it is really problematic to just specify the > gf's and leave the conditions as an implementation approach, I think we should do this. The conditions could be compatibly added later. If, as Julian suggests, there is some interaction with streams it makes me even more inclined to go with generic functions. > Q. Should we support seek and unget operations? > > We decided not to support them because the interaction with buffering > is hard to explain. Surely we have to support seek? 
I don't mind losing unget, since we have peek-char (though I would have thought an advantage of the buffer approach was that it was relatively easy to allow unlimited pushback). C doesn't have any trouble with fseek(), so surely we can do it. > Q. Should we support combined i-o streams in the definition? > > No need; all streams in this proposal support both I/O. I'm happy with this. > Finally, I have some sympathy with Richard's comment about URLs. > This is not exactly leading the field; it's getting back to the > complicated pathname mechanism. You're probably right about this. It would be nice if they just fitted in to the scheme, but I don't think we should worry about it at this stage. -- Richard * Streams :From: Harley Davis In article Richard Tobin writes: > Q. Should we support seek and unget operations? > > We decided not to support them because the interaction with buffering > is hard to explain. Surely we have to support seek? I don't mind losing unget, since we have peek-char (though I would have thought an advantage of the buffer approach was that it was relatively easy to allow unlimited pushback). C doesn't have any trouble with fseek(), so surely we can do it. What happens in C with fseek when you've done an operation to the buffers w/o flushing? Lewine's book which I have sitting here notes that any effects of ungetc are ignored, but doesn't talk about flushing. Lisp print & read have much more complex state than anything in C, and seek might be called in the middle of printing/reading objects (eg while calling the gf prin-object, or while reading a structure defined with #s if we have such a thing.) Either we say that such calls to seek are ignored until some precipitous moment, or print/read have to verify that the position remains constant across such calls. Other complex examples come to mind, but let's see if this example can be dealt with. 
-- Harley * Streams :From: Richard Tobin > What happens in C with fseek when you've done an operation to the > buffers w/o flushing? If the operation was a write, it flushes it. (If you seek to somewhere that happens to be in the buffer you don't have to do this, at least when reading.) > Lewine's book which I have sitting here notes that any effects of > ungetc are ignored Hmm, you're right. I had thought you were only allowed to ungetc the "right" character (ie the one you read), in which case the point would be moot, but in fact you seem to be able to ungetc any character. The standard specifies that the external storage is not changed. This is easiest to implement with a separate ungetc buffer. > Lisp print & read have much more complex state than anything in C, and > seek might be called in the middle of printing/reading objects (eg > while calling the gf prin-object, or while reading a structure defined > with #s if we have such a thing.) Either we say that such calls to > seek are ignored until some precipitous moment, or print/read have to > verify that the position remains constant across such calls. Or we have to say "undefined". I can't see a valid reason for doing such a thing. I really don't think we should allow it. (Does any other Lisp?) -- Richard * Streams :From: Jeff Dalton > I'm in favour of the combination of handler and generic function > because of the flexibility it provides. The issue that remains is how > I justify changing my opinion to accept default handlers. > > I remember our talking for a long time about default handlers a few > years ago. I don't want to get into a debate about whether I > mis-remembered, but I think we finally finessed the issue on the > grounds that we had no errors for which there was any useful default > treatment other than either displaying it and perhaps entering a > debugger or a break loop and that was something felt to be part of the > "environment" and not to be specified. 
Maybe some people saw it that way, and so decided not to care whether there were default handlers or not, but all of the points I've mentioned recently were considered and were factors when we designed the way handlers were established and when we discussed having a class of continuable errors. A number of default treatments for various conditions were discussed at various times as well. Moreover, Greg pointed out that default handlers had been useful in Le Lisp, using some of the same examples we're now hearing again. We explicitly considered this very stuff when designing the condition system.

> Since then our ideas on threads have crystallized significantly and I think this has some bearing on the matter. The proposal talks of a single global handler, but I suspect that is going to be inconvenient in a parallel world. We have also edged towards the recognition of a per-thread handler in the treatment of unhandled conditions on threads (the aborted state). So it seems to me that we have already almost accepted default-handlers---we just do not provide access to them and we still need not, although we can define additional default behaviour for them.

There is no question that people sometimes come close to introducing default handlers or something like them. When I notice this, I argue against it. The treatment of unhandled conditions is explicitly not a default handler. (At least it wasn't. I suppose someone might have changed it when I wasn't looking :->.) BTW, I can't figure out what signalling a thread is supposed to do from the 0.99 definition. Can someone tell me how it's supposed to work? Also, why is it right to use signal for this purpose (apart from the name "signal", that is)?

> Assuming we did agree to the handler + gf approach there are a number of nasty beasties lurking:

So far I haven't seen much in the way of argument _for_ that approach.
I think it's wrong to design by example, so that we suddenly have to have something just because some example requires it. There are millions of cases like that. If we're going to use examples, we need reasons to suppose the examples are important enough to justify adding a new feature that may introduce a number of unexpected complications. If someone wants to count the number of chars that are output, they can do this without handlers by defining a new class. If it's important to change how existing classes work, maybe it's a fault in TELOS that we can't. It might be better if each file simply contained fill and flush functions in slots, rather than using TELOS. (If it really is important to change how existing classes of streams work.) If it's argued that handlers let us change all streams dynamically, that's true, but it's also true of many other operations for which no one is suggesting that generics be paired with conditions. I don't see why buffer filling and flushing is such a special case.

> My preferred solution is for there to be a single gf for each of fill and flush which each thread uses (therefore any new methods potentially affect all threads) and that we state the existence of a default-handler (a generic function) for each thread which, by default, has a method for fill-buffer and flush-buffer which call fill and flush respectively.

Why is this now per-thread? This is becoming extremely complex, and I like it less and less the more I hear. There may be some cases where different threads want different handlers, but there will also be cases where they will all want the same one. So we'll have two levels of default, or some kind of defaulting to parent threads, or who knows what.

re: URLs

> It sounds cute, but I'd like to leave it out of the definition for now---we can provide it as an add on in an implementation by subclassing---so perhaps we do need an abstract class for filename.

That's just how I feel about default handlers.
They sound cute, but I'd leave them out. We can add them later if it's really necessary.

-- jeff

* Streams :From: Jeff Dalton

> Just a couple points, maybe more later...

> We use the conditions for all sorts of interesting hacks. For example, our pretty printer is based on trapping the flush-buffer condition and not calling the gf in certain cases. (The idea is, first you try to print a whole expression on one line. If that fails -- ie, if flush-buffer is called while printing the expression b/c right margin is passed -- then take the subexpressions and print them one per line, indented appropriately.)

It's possible to pretty-print without this, of course, and it may even be better to do it another way.

> This is not exactly leading the field; it's getting back to the complicated pathname mechanism. What we like about filenames vs. pathnames is that they encapsulate simple strings and use simple string processing operations to extract and combine information; these operations are based on pre-existing Unix commands.

I still don't know what you're thinking of here. basename?

> They don't have slots, and are not mutable. They also have a fixed syntax across OS's. This eliminates most of the major hassle of using CL-style pathnames. If all of these fields are needed, perhaps filenames are the wrong abstraction for EuLisp.

What are the hassles associated with having slots? What are the hassles with using CL-style pathnames? (I'm not saying there aren't any, of course; but I'd like to know what significant problems you think we're avoiding so that I can tell how much no slots (for example) buys us.)

-- jd

* Streams :From: Jeff Dalton

> Basically we perceived the same consensus as Richard summarised, ie character or file streams with specific operations, an abstract stream class, and generic stream operations with methods for character streams.

What about Richard's ustreams?
I would like us to use buffers, since this is a good answer to various efficiency problems. POSIX compatibility and fill/flush handlers are both separate issues. I'm in favor of going some way in the POSIX direction, but I don't think we need to go very far in the basic system (as opposed to a POSIX-oriented library).

> The default handler approach to buffer fill and flush operations is attractive because it gives the programmer two orthogonal techniques: they can put methods on the generic function (essentially a `static' approach at level 0) or they can introduce new handlers (a `dynamic' approach).

Sure, it gives these two techniques. But that's a description, not a reason. (After all, it doesn't automatically follow that it's better to have two orthogonal techniques.) What does this dual approach give us that's (a) important to have and (b) not available otherwise? Until now, we've been happy to define new stream classes in order to get new behavior. I think we already win by being able to do this. Most other languages can't, or can't do it as well. Now, if this isn't good enough, it looks to me like a rather general problem. Surely streams won't be the only case where the object system doesn't give us all the flexibility we want. So a general, and perhaps difficult, question is being raised. Moreover, default handlers are a general facility: surely we're not talking about having a total special case just for two stream conditions. This again raises a number of general, and perhaps difficult, questions. We haven't had anything like a sufficient discussion of these issues. I've already said that I'm not directly opposed to having default handlers or to having protocols that involve signalling conditions and then continuing. But I don't think default handlers are so trivial that we can just add them on the fly as a side effect of a proposal about streams.
-- jd

* FFI proposal :From: Jeff Dalton

> Perhaps the C syntax should mention a module and a function name to allow more implementation possibilities. The restriction would be that the module named must be the one defining the function in question, rather than one which might import the function (and possibly rename it locally).

Why? Indeed, if a module packages up various things, why should I ever have to know where they came from originally?

-- jd

* FFI proposal :From: Jeff Dalton

> A few comments on the FFI.
>
> I don't think it's reasonable to include anything in the language that requires a conservative GC.

I agree, even if (as Harley claims) conservative GCs are carrying the day. We should have a Lisp object that represents a void *, with conversions both ways, just as we might do for Lisp and C ints.

> The conversion of streams to file descriptors only makes sense for POSIX systems. We should say that it produces an integer under POSIX, but may produce something else in other systems.

On the Lisp side, it may not ever have to be an integer.

> I don't think the lack of computed access to functions by name is a serious problem.

It certainly doesn't require a general ability to access all functions by name. For instance, the programmer might call something on the Lisp side to add to a name-to-function table. (Richard notes this as well, I see.)

> Many implementations will have such access internally (at least for exported functions). Others (eg those in which all module linking is done statically) may have to provide some kind of linker.

It will be interesting to see how many retain their sanity after dealing with a.out files. I think I'd be inclined to have the Lisp system write some extra C instead.

> If you want to resolve it in the language, there could be a module syntax for exporting functions to foreign languages or a dynamic mechanism for putting lisp functions into a table that could be accessed from C.
If there's a module syntax for exporting foreign-callable functions, then presumably built-in functions wouldn't be foreign-callable by default. Ok, so someone could export them from another module in the required way. This conflicts with Harley's restriction that the function be referred to (on the C side) in its original module.

-- jeff

* Conditions :From: Jeff Dalton

I apologize for sending so long a message, but I think this is an important issue, and I want to make my position clear. Today I decided to take a closer look at what the 0.99 definition said about conditions, and I now think part of our problem may be that the definition and EuLisp have diverged. This may seem a strong way of putting it, but I think it's actually right, as I will try to explain.

We make decisions on, roughly, a consensus basis. By this I mean that we all have to agree. Of course, we could all agree to accept the result of a majority vote; but such votes are not automatically sufficient, as we confirmed at the Bath meeting last year when such a vote was overturned. I think that consensus is the best way to proceed if we want the design of EuLisp to be a friendly, cooperative process that results in a language we're all reasonably happy with. For one thing, it lets us avoid having decisions made by procedural technicalities and the like. That is, we don't end up saying "you weren't there when we took this vote, so you lose" and things like that.

Now, a consequence of this way of doing things is that the burden of proof is on the person proposing a change. Disagreement results in no change. This is reasonable, because it means that an earlier agreement, which was also reached by consensus, stands. However, it does mean that someone can block agreement in a case where they feel strongly enough. In most cases, this doesn't happen.
I've always thought that the best way to get EuLisp ready for publication is to establish a consensus version, check the definition carefully, and then print it. I think 0.6 was pretty much a consensus version, and some more recent versions have been as close. But other versions contained new ideas about which we hadn't yet agreed, or had other problems, so that they didn't represent a consensus on what EuLisp should be.

We rely on Julian to turn our ideas about what EuLisp should be into a definition, and he does an excellent job. However, the rest of us still have a responsibility to check, from time to time, that the definition still says what we think it should. Different people are concerned with different things. For instance, I'm not very concerned with the details of the TELOS MOP, and I rely on other people to make sure the definition is correct. However, I have been concerned with the details of the condition system. Since I've been to almost all the meetings and read the minutes and other notes that come out of the meetings I don't attend, I think I have a pretty good idea of what we've decided about conditions. But I'm afraid I haven't paid as much attention to the definition as I should.

I think that 0.96, which we discussed at the Bath meeting, was pretty close to a "consensus version" of EuLisp. However, it had already diverged a bit from what we agreed about conditions, and 0.99 has diverged further. I suspect that this came about through the work on threads. Now, the right thing to do when work in one area suggests a change in another is to explicitly consider this change. I don't remember any explicit discussion of some of these changes, and if it had occurred when I was present, or been reported in something I'd read, I'd have disagreed.

At this point, I can imagine some people thinking "if we have to get Jeff to agree before we can change how conditions work, it'll be really hard to change anything about conditions."
To that I would say two things. One is that I will agree if people think it's important enough to make the change or if I don't particularly mind the change myself. I've already agreed to lots of things in EuLisp, and even in the condition system, that I'd do differently if I were designing the language by myself.

The other thing I'd say is this: it should be hard to make changes when we don't agree, especially if we did agree on the version that some are trying to change. It's very difficult for me to get some changes made (e.g. the syntax of method definitions in a defgeneric and various things about modules), for instance, and I think this is right. After all, we did in effect decide to have things the way they are now, and so the burden of proof is on me if I want them changed. Moreover, I don't think we should end up in a situation in which it's trivial for some people to change the language and extremely difficult for others. The difficulty should depend on what we've already agreed and on how good the reasons for change are.

Consequently, I think that the definition should change to better reflect what we agreed about conditions and that threads will have to work with that version of conditions unless we agree to change it.

I don't claim the condition system we designed is the best possible. I'm not sure that "best" is well-defined in this case, but there may well be other systems that are equally reasonable. Certainly, this is the sort of issue where reasonable people can reasonably disagree. So I don't plan to try to convince anyone that this is the best system. Instead, I'll make the case I just made, namely that we ought to work by consensus and that the burden of proof is on those who want to change.

Now, the main way in which the definition and the agreed language have diverged is that in 0.99 it's said that "SIGNAL should never return".
In fact, when we designed the condition system, signal was explicitly supposed to return when all handlers "declined" the condition. Since then, to judge by recent versions, the definition has been moving away from that model to one in which SIGNAL is much closer to ERROR. 0.96 also said that SIGNAL shouldn't return. However, 0.96 was inconsistent on this point, because it also included the code fragments that explained how ERROR and CERROR worked, and in both of those code fragments something happens when SIGNAL returns. Hence a return is possible. The code fragments were either written by me or are very closely related to some that were written by me, and I have a clear memory of how we intended these things to work.

In the model used in the EuLisp condition system (but not in 0.99), SIGNAL is a primitive used to indicate that a certain situation has occurred. SIGNAL fires a blue rocket, and handlers can respond if they want, but SIGNAL does not itself say that execution cannot continue without the intervention of a handler. To say *that*, one calls ERROR. CERROR is similar to ERROR but provides one way to continue. (Unlike Common Lisp, EuLisp doesn't provide any direct support for cases in which there's more than one way to continue.) The terminology "signalled by ERROR" or "signalled as an error" can be used for cases when ERROR is called (ie, when SIGNAL is called by ERROR).

When ERROR calls SIGNAL, and SIGNAL returns, that means that no handler decided to handle the condition (ie, that all declined). ERROR then does something implementation-specific. Maybe the debugger is invoked. Maybe the whole program exits. The idea is that in this case normal processing cannot continue. That's what calling ERROR means. Note that it's the code that signals the condition that knows whether execution can continue or not, and that it's possible to signal a condition without requiring that it be handled. Both of these were important factors in the design.
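The semantics described here (SIGNAL returns when every handler declines; ERROR calls SIGNAL and then takes an implementation-specific action) can be modelled in a few lines. This is a hypothetical Python sketch of the behaviour, not EuLisp's actual mechanism; the handler list and the `Handled` exception stand in for dynamically established handlers and their non-local exits.

```python
# Toy model (hypothetical, not the EuLisp API) of the semantics above:
# signal offers the condition to each established handler; a handler
# "declines" by returning normally and "accepts" by a non-local exit.
# If every handler declines, signal itself returns; error then takes
# an implementation-specific action, modelled here by raising.

class Handled(Exception):
    """Non-local exit performed by a handler that accepts the condition."""
    def __init__(self, value):
        self.value = value

handlers = []            # dynamically established handlers, innermost first

def signal(condition):
    for handler in handlers:
        handler(condition)   # returning normally means "declined"
    return None              # all declined: SIGNAL returns to its caller

def error(condition):
    signal(condition)
    # SIGNAL returned, so no handler took over; normal processing cannot
    # continue -- the implementation-specific step (debugger, exit, ...)
    # is modelled here as an exception.
    raise RuntimeError("unhandled error: %r" % (condition,))
```

With no handler accepting, `signal` simply returns and execution continues, while `error` aborts; a handler accepts by raising `Handled`, standing in for invoking a continuation.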
I will send a second message in which I'll describe the implications for threads.

-- jeff

* Streams :From: Harley Davis

Date: Fri, 17 Dec 93 22:49:46 GMT :From: Jeff Dalton

> Just a couple points, maybe more later...

> We use the conditions for all sorts of interesting hacks. For example, our pretty printer is based on trapping the flush-buffer condition and not calling the gf in certain cases. (The idea is, first you try to print a whole expression on one line. If that fails -- ie, if flush-buffer is called while printing the expression b/c right margin is passed -- then take the subexpressions and print them one per line, indented appropriately.)

It's possible to pretty-print without this, of course, and it may even be better to do it another way.

Of course it's possible to do it in other ways. Some may produce better looking output. However, this is the simplest pretty printer I have ever seen, and the only one which uses backtracking in this way without generating intermediate strings, so I thought it was an interesting application of these conditions.

> This is not exactly leading the field; it's getting back to the complicated pathname mechanism. What we like about filenames vs. pathnames is that they encapsulate simple strings and use simple string processing operations to extract and combine information; these operations are based on pre-existing Unix commands.

I still don't know what you're thinking of here. basename?

Yes, basename. Sun-OS also has dirname. The "extension" function fits in well with basename and merge-filenames but it is an invention. "device" too.

> They don't have slots, and are not mutable. They also have a fixed syntax across OS's. This eliminates most of the major hassle of using CL-style pathnames. If all of these fields are needed, perhaps filenames are the wrong abstraction for EuLisp.

What are the hassles associated with having slots? What are the hassles with using CL-style pathnames?
(I'm not saying there aren't any, of course; but I'd like to know what significant problems you think we're avoiding so that I can tell how much no slots (for example) buys us.)

An implementation could use slots for filenames (although I can say from experience that this probably isn't the best approach), but the fact that filenames aren't mutable eliminates the possibility of several subtle bugs which I've seen with pathnames in which the pathname is modified but code assumes it still names the same file. As far as pathname slots, the one that bugs me is directory, which is a list of strings and/or wildcard directives. I find that code which needs to work with directories is hairy even though it usually only does simple things. I felt a great sense of lightness and aesthetic pleasure after having replaced all the pathname code in Talk with filename code; the result was inevitably more readable, clearer, and shorter. It also allocates less and is faster, but this is purely implementational.

All of this is not to criticize pathnames; they have the very specific goal of being portable across a wide range of OS's, while filenames are specifically targeted to Unix and sufficiently similar systems. This fact lets them be simpler. If it is important to be portable to VMS, Symbolics, ITS, TOPS-20, and other relics, then pathnames would certainly be the right choice. However, today's world (especially for EuLisp, it would seem) is Unix, Windows (& MS/DOS), maybe Mac-OS, and few others.

-- Harley

* FFI proposal :From: Harley Davis

Date: Fri, 17 Dec 93 23:11:52 GMT :From: Jeff Dalton

> Perhaps the C syntax should mention a module and a function name to allow more implementation possibilities. The restriction would be that the module named must be the one defining the function in question, rather than one which might import the function (and possibly rename it locally).

Why?
Indeed, if a module packages up various things, why should I ever have to know where they came from originally?

I was just trying to think of various potential implementation problems, but I guess I really shouldn't care. Do you agree however that the name must be exported from the module?

-- Harley

* FFI proposal :From: Jeff Dalton

> > Perhaps the C syntax should mention a module and a function name to allow more implementation possibilities. The restriction would be that the module named must be the one defining the function in question, rather than one which might import the function (and possibly rename it locally).
>
> Why? Indeed, if a module packages up various things, why should I ever have to know where they came from originally?
>
> I was just trying to think of various potential implementation problems, but I guess I really shouldn't care. Do you agree however that the name must be exported from the module?

That sounds reasonable, but I could also imagine a module that included both C and Lisp routines, and then it would make sense for them to be able to call each other just as Lisp routines could; and in that case the Lisp funs wouldn't have to be exported. However, that approach would presumably require some mechanism that said which Lisp functions were callable from C. It's simpler, at least, to use module exports for this purpose instead. In that case, would all "built-in" functions be C callable?

Anyway, I'm reasonably happy for this to go either way, though using modules seems more in the spirit of EuLisp.

-- jeff

* FFI proposal :From: Harley Davis

In article Jeff Dalton writes:

> A few comments on the FFI.
>
> I don't think it's reasonable to include anything in the language that requires a conservative GC.

I agree, even if (as Harley claims) conservative GCs are carrying the day. We should have a Lisp object that represents a void *, with conversions both ways, just as we might do for Lisp and C ints.
I have no problem with this; the type in our proposal serves exactly this function. I just wanted to throw out the ptr idea for discussion.

> I don't think the lack of computed access to functions by name is a serious problem.

It certainly doesn't require a general ability to access all functions by name. For instance, the programmer might call something on the Lisp side to add to a name-to-function table. (Richard notes this as well, I see.)

Or there could be a defining form for foreign-callable functions, like extern "C" in C++. (OK, extern "C" defines a scope, but it's the same idea.)

> If you want to resolve it in the language, there could be a module syntax for exporting functions to foreign languages or a dynamic mechanism for putting lisp functions into a table that could be accessed from C.

If there's a module syntax for exporting foreign-callable functions, then presumably built-in functions wouldn't be foreign-callable by default. Ok, so someone could export them from another module in the required way. This conflicts with Harley's restriction that the function be referred to (on the C side) in its original module.

I do think it would be a shame if it was necessary to declare in Lisp that some Lisp function was callable from C, but it has precedent.

-- Harley

* Streams :From: Jeff Dalton

> It's possible to pretty-print without this, of course, and it may even be better to do it another way.
>
> Of course it's possible to do it in other ways. Some may produce better looking output. However, this is the simplest pretty printer I have ever seen, and the only one which uses backtracking in this way without generating intermediate strings, so I thought it was an interesting application of these conditions.

I agree that it's an interesting application. But I don't think it follows that we should therefore have these conditions. In any case, BTW, the conditions should have names that sound like conditions rather than requests.
Also BTW, the EuLisp condition system (as described in my message of yesterday and as distinct from the system described in 0.99) can be extended to have default handlers. But it can also handle this sort of signal without default handlers, because SIGNAL doesn't require that the condition be handled. So instead of something like this:

(let/cc continue
  (signal (make ...) continue))

you'd have something like this:

(let/cc continue
  (signal (make ...) continue)
  ;; Here if no handler.  So call fill-buffer here if it's normally
  ;; handlers that call it.
  )
;; Call fill-buffer here if the protocol says handlers don't call it.

> I still don't know what you're thinking of here. basename?
>
> Yes, basename. Sun-OS also has dirname. The "extension" function fits in well with basename and merge-filenames but it is an invention. "device" too.

Ok. No problem. What I really found hard to grasp about merge-filename was the rules for how things were combined. Sometimes it's defaulting, sometimes an exclusion rule, etc.

> > They don't have slots, and are not mutable. They also have a fixed syntax across OS's. This eliminates most of the major hassle of using CL-style pathnames. If all of these fields are needed, perhaps filenames are the wrong abstraction for EuLisp.
>
> What are the hassles associated with having slots? What are the hassles with using CL-style pathnames? (I'm not saying there aren't any, of course; but I'd like to know what significant problems you think we're avoiding so that I can tell how much no slots (for example) buys us.)
>
> An implementation could use slots for filenames (although I can say from experience that this probably isn't the best approach), but the fact that filenames aren't mutable eliminates the possibility of several subtle bugs which I've seen with pathnames in which the pathname is modified but code assumes it still names the same file.

I'm happy for pathnames to be immutable.
In fact, I think it's a good idea.

> As far as pathname slots, the one that bugs me is directory, which is
> a list of strings and/or wildcard directives. I find that code which
> needs to work with directories is hairy even though it usually only
> does simple things.

One could have a list of strings without allowing wildcards, of course. I don't think the code would be too bad, and it ought to be more efficient than repeatedly parsing strings.

> I felt a great sense of lightness and aesthetic
> pleasure after having replaced all the pathname code in Talk with
> filename code; the result was inevitably more readable, clearer, and
> shorter. It also allocates less and is faster, but this is purely
> implementational.

OK. I don't mind having filenames (in this sense) instead of pathnames. I just wanted to have a better idea of what difference it makes.

> All of this is not to criticize pathnames; they have the very specific
> goal of being portable across a wide range of OS's, while filenames
> are specifically targeted to Unix and sufficiently similar systems.

There are also the logical pathnames, and so forth. Pathnames might also be easier to extend with additional fields (maybe), e.g. for WWW-type stuff.

-- jeff

* FFI proposal :From: Harley Davis

In article Jeff Dalton writes:

> Why? Indeed, if a module packages up various things, why
> should I ever have to know where they came from originally?
>
> I was just trying to think of various potential implementation
> problems, but I guess I really shouldn't care. Do you agree however
> that the name must be exported from the module?

That sounds reasonable, but I could also imagine a module that included both C and Lisp routines, and then it would make sense for them to be able to call each other just as Lisp routines could; and in that case the Lisp funs wouldn't have to be exported. However, that approach would presumably require some mechanism that said which Lisp functions were callable from C.
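A sketch of what such a mechanism might look like, in EuLisp-flavoured pseudocode. All the names here (declare-c-callable, the table accessors, and the C-side lookup calls) are inventions for illustration, not anything from the draft definition:

```lisp
;; Hypothetical dynamic registry of foreign-callable functions.
(deflocal *c-callable-functions* (make <table>))

(defun declare-c-callable (name fun)
  ;; record the function under a C-visible string name
  ((setter table-ref) *c-callable-functions* name fun))

(declare-c-callable "my-callback"
  (lambda (x) (print x)))

;; The C side would then resolve names through the runtime, e.g.
;;   LispFun f = eul_lookup_callable("my-callback");
;;   eul_apply(f, arg);
```

The module-export alternative discussed below avoids this explicit registration step, at the cost of tying foreign visibility to module structure.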
It's simpler, at least, to use module exports for this purpose instead. In that case, would all "built-in" functions be C callable?

I would like all built-ins to be C callable, and as I said previously I think it would be somewhat annoying, and perhaps redundant, to oblige programmers to specify in Lisp which fns are callable. The module approach seems simpler; I would only hesitate if someone said it would cause implementational problems when modules are implemented using some standard implementation technique. In fact, I would be curious to know if this approach could work with FEEL and APPLY.

It doesn't cause any such problems for us, which is why we adopted that approach.

-- Harley

* Conditions and threads :From: Jeff Dalton

I hope people are willing to give the EuLisp condition system (as I've called it, somewhat contentiously) a chance. I think it can do the things we're talking about doing with streams and threads reasonably well. I said something about streams in a reply to Harley earlier today. This link isn't reliable enough for me to send much on threads right now, but I'll try to say something.

As I indicated in my message on conditions, SIGNAL is a primitive used to indicate that a certain situation has occurred, but SIGNAL does not itself say that execution cannot continue without the intervention of a handler. To say *that*, one calls ERROR (or CERROR, but from now on I'll just talk about ERROR. CERROR is the same except that it provides another thing that handlers can do -- a continuation they can call.)

So calling SIGNAL would not abort the current thread when all handlers (in the dynamic env of the thread) decline, but calling ERROR could. However, when all handlers decline a condition signalled by ERROR, normal processing cannot continue (because that's what calling ERROR means), so the normal thing to do in an interactive system would be to enter the debugger.
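The SIGNAL/ERROR distinction Jeff describes can be summarised in a sketch. This is pseudocode only: enter-debugger is an invented name, and the draft does not define ERROR this way:

```lisp
;; SIGNAL announces a situation; returning normally is fine even
;; when every handler declines.
;; ERROR announces a situation from which normal processing cannot
;; continue; if every handler declines, drop into the debugger.
(defun error (condition)
  (signal condition ())          ; offer the condition to handlers
  (enter-debugger condition))    ; reached only if all decline

;; CERROR is the same, except that handlers also receive a
;; continuation they may invoke to resume past the call:
(defun cerror (condition)
  (let/cc resume
    (signal condition resume)
    (enter-debugger condition)))
```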
Moreover, it's best to enter the debugger now rather than save up the condition for when someone calls THREAD-VALUE, because the context needed to understand what went wrong is still available.

This means that ERROR wouldn't automatically abort the thread. However, there are cases when you want to pass conditions on to whoever calls THREAD-VALUE. A handler can be used for this purpose, but it would need a procedure, such as ABORT-THREAD, that it could call.

BTW, when someone does call THREAD-VALUE, I don't think they should just get the same condition signalled on them. I think they should get a condition of class (or maybe some name with "thread" in it) that contains the condition that wasn't handled.

Also BTW, section 11.2.3.3 (remarks about LOCK), and other sections that look like they might discuss this, are not very clear on the question of a thread that calls LOCK getting a condition from another thread. How does this work? When does it happen? Etc.

-- jeff

* Streams :From: Harley Davis

> Also BTW, the EuLisp condition system (as described in my message of
> yesterday and as distinct from the system described in 0.99) can be
> extended to have default handlers. But it can also handle this sort
> of signal without default handlers, because SIGNAL doesn't require
> that the condition be handled. So instead of something like this:
>
>   (let/cc continue
>     (signal (make ...) continue))
>
> you'd have something like this:
>
>   (let/cc continue
>     (signal (make ...) continue)
>     ;; Here if no handler.  So call fill-buffer here if it's normally
>     ;; handlers that call it.
>     )
>   ;; Call fill-buffer here if the protocol says handlers don't
>   ;; call it.

I think this points out a flaw with the EuLisp condition system. The way the system is defined, I can't call the next handler except by returning from my own handler. However, in practice I often want to call the next handler, and then continue (assuming the next handler returns).
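The two behaviours at issue can be sketched side by side. This is illustrative pseudocode: call-next-handler is a hypothetical operator here, and the helper names are invented:

```lisp
;; As currently defined: returning from a handler passes the
;; condition to the next handler, and this handler never regains
;; control.
(with-handler
    (lambda (condition k)
      (log-event condition))         ; then the next handler runs
  (read-object stream))

;; As preferred: the next handler is invoked explicitly, and if it
;; returns, the current handler continues from that point.
(with-handler
    (lambda (condition k)
      (call-next-handler)            ; hypothetical operator
      (tidy-up-after condition))     ; runs if the next handler returns
  (read-object stream))
```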
I prefer a form like call-next-handler rather than the current behaviour of calling the next handler when the current handler returns. I would prefer having a handler function return to mean just that - the handler returns, and execution continues from either the point that called call-next-handler, or from the call to signal if the first handler returns.

In Talk, 19% of all handler methods call call-next-handler.

Another interesting data point: In Talk, out of 86 condition classes, only 4 are error conditions; the rest are used by the development environment or the I/O system to signal events where continued execution is expected after the signal. We have found almost no use for detailed error classes for particular errors. I would be interested to know of other similar statistics in FEEL or Common Lisp. (In CL, I would like to see statistics for non-documented, implementation-specific condition classes.)

-- Harley

* Streams :From: Dave De Roure

> > However, I don't think it is really problematic to just specify the
> > gf's and leave the conditions as an implementation approach,
>
> I think we should do this. The conditions could be compatibly added
> later. If, as Julian suggests, there is some interaction with streams
> it makes me even more inclined to go with generic functions.

I decided to do that, but it was pointed out to me that the programmer should know if the conditions are going to be signalled - otherwise they might find handlers being called more often than they intend. We can argue that handlers only handle specified conditions so the extra calls are harmless, but if the programmer creates an `unwind-protect' situation they need to know. But since nobody else has mentioned this, maybe it's not a real concern?

> > Q. Should we support seek and unget operations?
> >
> > We decided not to support them because the interaction with buffering
> > is hard to explain.
>
> Surely we have to support seek?
> I don't mind losing unget, since we
> have peek-char (though I would have thought an advantage of the buffer
> approach was that it was relatively easy to allow unlimited pushback).
> C doesn't have any trouble with fseek(), so surely we can do it.

Julian - to fit streams with collections, do we need seek? It seems to me that files (rather than stdin etc) fit the collection abstraction well (they have length, you can do ref/seek operations), while both file and stdin-type streams fit the iteration protocol without using seek.

> > Q. Should we support combined i-o streams in the definition?
> >
> > No need; all streams in this proposal support both I/O.
>
> I'm happy with this.

Me too. Nobody has expressed a wish to retain the old combined streams idea, so I'll take it that this is agreed.

> > Finally, I have some sympathy with Richard's comment about URLs.
> >
> > This is not exactly leading the field; it's getting back to the
> > complicated pathname mechanism.
>
> You're probably right about this. It would be nice if they just
> fitted in to the scheme, but I don't think we should worry about it at
> this stage.

OK, shall I adopt ilogtalk filenames for now then?

-- Dave

* Streams :From: Dave De Roure

> > Basically we perceived the same consensus as Richard summarised, ie character
> > or file streams with specific operations, an abstract stream class, and
> > generic stream operations with methods for character streams.
>
> What about Richard's ustreams?

I don't know. At the meeting we agreed to look at the ilogtalk ones and see how they integrate with EuLisp, which is what Julian and I spent some time on at GMD and is what I am now pursuing via this email discussion, with a view to distributing a draft of a revised streams section of the definition as soon as possible. Unfortunately this is already more than enough work for me to do, because I'm tied up full time with another project at the moment - I can't really take on an investigation of another solution.
But anyone else is welcome to do so!

> I would like us to use buffers, since this is a good answer to various
> efficiency problems. POSIX compatibility and fill/flush handlers are
> both separate issues. I'm in favor of going some way in the POSIX
> direction, but I don't think we need to go very far in the basic
> system (as opposed to a POSIX-oriented library).

I agree.

> > The default handler approach to buffer fill and flush operations is
> > attractive because it gives the programmer two orthogonal techniques:
> > they can put methods on the generic function (essentially a `static'
> > approach at level 0) or they can introduce new handlers (a `dynamic'
> > approach).
>
> Sure, it gives these two techniques. But that's a description, not
> a reason. (After all, it doesn't automatically follow that it's better
> to have two orthogonal techniques.) What does this dual approach give
> us that's (a) important to have and (b) not available otherwise?

I believe that anything you can do with one you can also do with the other, with a bit of hacking. I found the dual approach attractive because it seemed to make the language more expressive at minimal cost, but now that it's become apparent that this opens a can of worms (aka default handlers) then I think we should leave it to EuLisp 2.

-- Dave

* Streams :From: Dave De Roure

Sorry, I'm going back to an earlier question. Do we wish to include object streams (aka fifo streams in an earlier proposal) in the definition? Just to remind you, object streams are a subclass of streams (like file-streams, aka character-streams) with methods on the generic io operations; they behave as queues and are a useful unifying feature because they enable the streams protocol to work with general collections.

I believe that Julian supports the inclusion of object-streams, and I am happy to do so as well. Does anyone have any strong feelings on this? If I include them in the revision, we can always exclude them later.
-- Dave

* Streams :From: Harley Davis

> Sorry, I'm going back to an earlier question. Do we wish to include
> object streams (aka fifo streams in an earlier proposal) in the
> definition? Just to remind you, object streams are a subclass of
> streams (like file-streams, aka character-streams) with methods on
> the generic io operations; they behave as queues and are a useful
> unifying feature because they enable the streams protocol to work
> with general collections.
>
> I believe that Julian supports the inclusion of object-streams, and I
> am happy to do so as well. Does anyone have any strong feelings on
> this? If I include them in the revision, we can always exclude them
> later.

I think you have missed the crucial point in this stream proposal. The whole idea is that streams are not classified according to whether they do input or output, or by the type of object which can be input/output, but rather by the physical entity to which they are connected. Thus your earlier confusion that some streams might do i, others o, and still others i/o -- and the current confusion that there needs to be a special stream for "objects" and another for "characters".

But there is no such distinction in this proposal. All streams can do character i/o (using read-char and print-char, or looking directly at the buffers), and all of them can do object i/o (using read/print/printf). File-streams are called file-streams rather than character-streams because they are attached to and buffered via files. They can do i/o at both the character and the object level.

Similarly, if you want a new, in-memory type of stream, you should specify to what it is attached for buffering: a vector of strings, a list of strings, a single string, or something else. This is the whole point of the proposal: it removes artificial distinctions which are unnecessary, and preserves only those distinctions which actually make a difference.
It does this by describing a single mechanism by which both the high-level object-based and high-level character-based operations are translated into lower-level buffer-based operations, and by describing a common protocol for the buffer operations which can be efficiently implemented for a variety of real data sources and sinks. What distinguishes one stream class from another is the way it implements this protocol -- in practice, the type of the object from which and to which the buffers are retrieved or output. Whether you do input vs. output, or character vs. object operations, depends on how you use a stream, rather than being an inherent characteristic of the stream itself.

I hope this clarifies the intent of our proposal.

-- Harley

* Streams :From: Dave De Roure

> I think you have missed the crucial point in this stream proposal.

I don't think so :-)

> Thus your earlier confusion that some streams might do i,
> others o, and still others i/o -- and the current confusion that there
> needs to be a special stream for "objects" and another for
> "characters".

The distinctions come from adapting what we already had rather than starting completely afresh. But it looks like we should be adopting your proposal as it is rather than bending what we have to meet it. That's my fault for trying to edit existing code and definition rather than starting from scratch - this is a case for adopting, not adapting!

> Similarly, if you want a new, in-memory type of stream, you should
> specify to what it is attached for buffering: a vector of strings, a
> list of strings, a single string, or something else.

Yes, I can see that an object stream is a stream with some kind of (non-string) collection as buffer, and what we'd need in the definition is the methods to support this. But this isn't what came out of the discussions that Julian and I had at GMD - we definitely had a separate object-stream class (at least according to my notes), so that's what I've been pursuing.
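To make Harley's buffer-protocol idea concrete, a new in-memory stream under his proposal might be sketched like this. The class syntax, slot options, and the fill-buffer/flush-buffer generic-function names are guesses for illustration, not the draft's definitions:

```lisp
;; A stream attached to a single string for buffering.
(defclass <string-stream> (<stream>)
  ((contents keyword contents: default "")))

;; Input side: refill the stream's input buffer from the string.
(defmethod fill-buffer ((s <string-stream>))
  (set-input-buffer s (string-stream-contents s))
  ((setter string-stream-contents) s ""))

;; Output side: append the output buffer to the string.
(defmethod flush-buffer ((s <string-stream>))
  ((setter string-stream-contents) s
   (string-append (string-stream-contents s)
                  (output-buffer s))))
```

All the character-level and object-level operations (read-char, print, printf, ...) would then work on this class unchanged, since they bottom out in the buffer protocol.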
I can't remember the rationale - I think I need some input from Julian here...

BTW, if we have a queue class and use this as a buffer, we have a stream that preserves eq-ness. The string buffers, with their encoding and decoding, presumably don't? Comments...

> This is the whole point of the proposal: it removes artificial
> distinctions which are unnecessary, and preserves only those
> distinctions which actually make a difference. It does this by
> describing a single mechanism by which both the high-level
> object-based and high-level character-based operations are translated
> into lower-level buffer-based operations, and by describing a common
> protocol for the buffer operations which can be efficiently
> implemented for a variety of real data sources and sinks. What
> distinguishes one stream class from another is the way it implements
> this protocol -- in practice, the type of the object from which and to
> which the buffers are retrieved or output.

Yeah, that's why it's such a cool proposal. But the number of times I've come away from meetings with a new streams proposal and then had trouble progressing it to everyone's satisfaction makes me wonder whether I'm the right person to be doing this... At least this one has facticity :-)

-- Dave

* Conditions and threads :From: Keith Playford

> :From: Jeff Dalton
> Date: Sun, 19 Dec 93 15:34:56 GMT
>
> [...]
>
> BTW, when someone does call THREAD-VALUE, I don't think they should
> just get the same condition signalled on them. I think they should
> get a condition of class (or maybe some name
> with "thread" in it) that contains the condition that wasn't handled.

For what it's worth, I agree with this. Transparently delegating a condition's handling to dependent threads feels like an assumption too far about the model within which threads are going to be employed. Transparent delegation should be simple to implement but probably not the default.
In fact, yes, entering the debugger would make for the most useful default behaviour. Now, if I had pre-packaged and libraries to choose from too...

-- Keith

[Sorry if I've misunderstood from the mail - I've not kept up with the latest definition.]

* Streams :From: Harley Davis

> BTW, if we have a queue class and use this as a buffer, we have a
> stream that preserves eq-ness. The string buffers, with their
> encoding and decoding, presumably don't? Comments...

I think that streams should all equally preserve (or not) identity so that all classes of them can be used in any context which accepts a stream. A queue would thus not be a stream, and a queue class should use a protocol other than print/read to pass objects through the queue. Streams could conceivably fall in the class hierarchy somewhere other than directly under , but I think print/read should have specific meaning vis a vis the buffers, which are necessarily strings.

-- Harley

* Streams :From: Richard Tobin

> Sorry, I'm going back to an earlier question. Do we wish to include
> object streams (aka fifo streams in an earlier proposal) in the
> definition?

Aren't they rather trivial for the user to define? I'd be inclined to leave them out and quote them as an example of how easy it is to extend the stream system.

-- Richard
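The kind of user-level definition Richard has in mind might be sketched as follows. Everything here is illustrative: the class syntax, accessor names, and the <end-of-stream> condition class are inventions, and this queue-based version deliberately bypasses the string-buffer protocol (so, per Harley's point, it preserves identity and is arguably not a "stream" in his sense):

```lisp
;; A minimal user-defined object stream: a queue with read and
;; print methods on the generic i/o operations.
(defclass <object-stream> (<stream>)
  ((queue default () accessor object-stream-queue)))

(defmethod generic-print (object (s <object-stream>))
  ;; enqueue the object at the tail
  ((setter object-stream-queue) s
   (append (object-stream-queue s) (list object))))

(defmethod generic-read ((s <object-stream>))
  ;; dequeue from the head, or signal when the queue is empty
  (let ((q (object-stream-queue s)))
    (if (null q)
        (signal (make <end-of-stream> stream: s) ())
        (progn ((setter object-stream-queue) s (cdr q))
               (car q)))))
```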