Discussion:
Sean E. Russell
2004-03-23 16:29:30 UTC
Hello,

I posted this question to comp.lang.functional, and someone suggested that I

I'm struggling with monads. Well, not monads themselves, but mixing them with

Here's my base case:

someFunc :: String -> IO [a]
...
ax <- someFunc a
bx <- someFunc b
assertBool "fail" $ length ax == length bx

I don't like the assignments; the typing is redundant if I have a lot of
asserts like this, and the "variables" are temporary. What I'd much rather
have is:

...
assertBool "fail" $ (length $ someFunc a) == (length $ someFunc b)

which is more readable, to my eye.

The only solution which has been suggested that may work is liberal use of the
liftM variants, but this gets *really* tedious and obtuse.
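Spelled out, the liftM-style workaround looks something like this (a sketch; `someFunc`, `assertBool`, and `checkLengths` are hypothetical stand-ins matching the signatures above):

```haskell
import Control.Monad (liftM2, unless)

-- hypothetical stand-ins matching the signatures in the question
someFunc :: String -> IO [Char]
someFunc = return

assertBool :: String -> Bool -> IO ()
assertBool err b = unless b (error err)

-- the lifted comparison: no named temporaries, one lift per assert
checkLengths :: String -> String -> IO ()
checkLengths a b =
  assertBool "fail" =<< liftM2 (\xs ys -> length xs == length ys)
                               (someFunc a) (someFunc b)
```

It works, but every assert pays the cost of one explicit lift, which is exactly the tedium being complained about.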

Is there an elegant way to do what I want to do, or am I stuck with
procedural-style assignments and bunches of temp vars?

Thanks!
--
### SER
### Deutsch|Esperanto|Francaise|Linux|XML|Java|Ruby|Aikido
### http://www.germane-software.com/~ser jabber.com:ser ICQ:83578737
### GPG: http://www.germane-software.com/~ser/Security/ser_public.gpg
Graham Klyne
2004-03-23 17:39:57 UTC
I think you're rather stuck with the "temporary variables" (which they're
not really), but it might be possible to hide some of the untidiness in an
auxiliary function.

Assuming this function is given:

assertBool :: String -> Bool -> IO ()

...

My first stab would be:

assertBool1 :: IO [a] -> IO [a] -> ([a] -> [a] -> Bool) -> IO ()
assertBool1 f1 f2 comp = do
  a1 <- f1
  a2 <- f2
  assertBool "fail" $ comp a1 a2

Then your main code could be:

assertBool1 (someFunc a) (someFunc b) (==)

...

So far so good, maybe? But what happens if the values you want to test
are not lists? The assertBool1 could be generalized somewhat:

assertBool2 :: IO a -> IO b -> (a -> b -> Bool) -> IO ()
assertBool2 fa fb comp = do
  va <- fa
  vb <- fb
  assertBool "fail" $ comp va vb

(All that's really changed here is the type signature)

...

But what if you want a test that uses just one value, or three, or
more? At this point I think you start having to use the liftM variants,
but others may have better ideas. It may be that the lifting can be hidden
in auxiliary function/operator definitions; e.g. see the Haskell library

...

Anyway, here's some complete code, tested under Hugs:
[[

import Monad (unless)

assertBool :: String -> Bool -> IO ()
assertBool err bool = unless bool (error err)

someFunc :: String -> IO String
someFunc s = return s

assertBool1 :: IO [a] -> IO [a] -> ([a] -> [a] -> Bool) -> IO ()
assertBool1 f1 f2 comp = do
  a1 <- f1
  a2 <- f2
  assertBool "fail" $ comp a1 a2

test1 :: String -> String -> IO ()
test1 a b =
  do { assertBool1 (someFunc a) (someFunc b) (==)
     ; putStrLn "test1 OK"
     }
-- test1 "a" "a" -> "test1 OK"
-- test1 "a" "b" -> "fail"

assertBool2 :: IO a -> IO b -> (a -> b -> Bool) -> IO ()
assertBool2 fa fb comp = do
  va <- fa
  vb <- fb
  assertBool "fail" $ comp va vb

test2 :: String -> String -> IO ()
test2 a b =
  do { assertBool2 (someFunc a) (someFunc b) (==)
     ; putStrLn "test2 OK"
     }
-- test2 "a" "a" -> "test2 OK"
-- test2 "a" "b" -> "fail"
]]

#g
--
------------
Graham Klyne
For email:
http://www.ninebynine.org/#Contact
Sean E. Russell
2004-03-23 18:56:04 UTC
Post by Graham Klyne
I think you're rather stuck with the "temporary variables" (which they're
not really), but it might be possible to hide some of the untidiness in an
That seems to be the common suggestion: write my own visitors.

I'm just surprised that there isn't a more elegant mechanism for getting
at values inside monads; the current state of affairs just seems awkward.

[Warning: quasi-rant]

Caveat: I'm not smart enough, and I don't know enough, to criticize Haskell,
simple questions and I'm getting simple answers, I'm talking to God." I
simply mistrust, and therefore question, systems where simple things are
overly involved.

like an excuse to me. We have monads, because they allow side-effects. Ok.
If programs that used side effects were uncommon, I'd be fine with them being
troublesome -- but they aren't. Maybe it is just me, but my Haskell programs
invariably develop a need for side effects within a few tens of lines of
code, whether IO, Maybe, or whatnot. And I can't help but think that
language support to make dealing with monads easier -- that is, to integrate
monads with the rest of the language, so as to alleviate the need for
constant lifting -- would be a Good Thing.

Hmmm. Could I say that Haskell requires "heavy lifting"?
--
### SER
### Deutsch|Esperanto|Francaise|Linux|XML|Java|Ruby|Aikido
### http://www.germane-software.com/~ser jabber.com:ser ICQ:83578737
### GPG: http://www.germane-software.com/~ser/Security/ser_public.gpg
Frederik Eaton
2005-09-08 05:24:37 UTC
Hi,

Sean's comment (yeah, it was like a billion years ago, just catching
up) is something that I've often thought myself.

I want the type system to be able to do "automatic lifting" of monads,
i.e., since [] is a monad, I should be able to write the following:

[1,2]+[3,4]

and have it interpreted as "do {a<-[1,2]; b<-[3,4]; return (a+b)}".
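With today's Haskell, the elaboration described here can of course be written out by hand; a minimal sketch (`sumLists` and `sumLists'` are names chosen for illustration):

```haskell
import Control.Monad (liftM2)

-- what the proposed sugar [1,2]+[3,4] would elaborate to:
sumLists :: [Int] -> [Int] -> [Int]
sumLists xs ys = do { a <- xs; b <- ys; return (a + b) }

-- the same thing via explicit lifting:
sumLists' :: [Int] -> [Int] -> [Int]
sumLists' = liftM2 (+)
```

Note (as comes up later in the thread) that this is the cross-product reading: sumLists [1,2] [3,4] is [4,5,5,6], not an elementwise sum.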

Also, I would have

The point I want to make is that this is much more general than IO or
monads! I think we all understand intuitively what mathematicians mean

{1,2}+{3,4} (i.e. { x+y | x\in {1,2}, y\in {3,4}})

(f+g)(x) where f(x)=x+1 and g(x)=x+4

So "automatic lifting" is a feature which is very simple to describe,
but which gives both of these notations their intuitive mathematical
meaning - not to mention making monadic code much tidier (who wants to
spend their time naming variables which are only used once?). I think
it deserves more attention.

I agree that in its simplest incarnation, there is some ugliness: the
order in which the values in the arguments are extracted from their
monads could be said to be arbitrary. Personally, I do not think that
this in itself is a reason to reject the concept. Because of currying, if
you think of the proposed operation not as lifting, but as inserting
`ap`s:

return f ap x1 ap ... ap xn

then the ordering problem doesn't seem like such a big deal. I mean,
what other order does one expect, than one in which the arguments are
read in the same order that 'f' is applied to them?
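Written out with `ap`, that left-to-right reading order is visible in the expression itself; a small sketch in the Maybe monad (the name `addThree` is hypothetical):

```haskell
import Control.Monad (ap)

-- return f `ap` x1 `ap` ... `ap` xn, here with n = 3 in the Maybe monad;
-- arguments are read left to right, the order in which f consumes them
addThree :: Maybe Int -> Maybe Int -> Maybe Int -> Maybe Int
addThree x y z = return (\a b c -> a + b + c) `ap` x `ap` y `ap` z
```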

Although it is true that in most of the instances where this feature
would be used, the order in which arguments are read from their monads
will not matter; yet that does not change the fact that in cases where
order *does* matter it's pretty damn easy to figure out what it will
be. For instance, in

two lines are read and then printed. Does anybody for a moment
question what order the lines should be read in?

Frederik
--
http://ofb.net/~frederik/
Ben Lippmeier
2005-09-08 06:08:05 UTC
Post by Frederik Eaton
I want the type system to be able to do "automatic lifting" of monads,
[1,2]+[3,4]
and have it interpreted as "do {a<-[1,2]; b<-[3,4]; return (a+b)}".
two lines are read and then printed. Does anybody for a moment
question what order the lines should be read in?
Frederik,
To do "automatic lifting" you need to do a (higher-order) effect
analysis, then make sure the compiler doesn't rearrange applications
which have conflicting effects.

One way of preventing the compiler from rearranging effects is to thread
through a dummy variable - like a "World token", a la the IO monad - which
makes the order of operations explicit as an extra data dependency, then
compile the resulting code.

Another way is to use the effect information to lift the applications
into a hierarchy of monads which represent how effectful the application
is, then compile the monadic code directly. There's a paper by Andrew
Tolmach called "Optimizing ML using a hierarchy of monadic types", which
does exactly this.

Tolmach's approach worked OK, but there were some problems with higher
order functions, i.e. with map :: (a -E> b) -> [a] -E> [b] where E is
some effect: you have to assume a worst-case effect for the first
argument, so any expression using map can't be moved around by the
compiler, e.g. for the full laziness transform.

Another way would be just to annotate every application with the effects
it has, then have the compiler check these before it tries to rearrange
anything - and have an extra rule that you can't suspend an application
which has visible effects.

I am working on a compiler for my PhD project which takes this third
option. I've got the effect analysis working, but I had to resort to a
graph-based type inference method, which is something that wouldn't be
easily added to something like GHC.

Onward!
Ben.
Frederik Eaton
2005-09-08 07:22:59 UTC
Post by Ben Lippmeier
Frederik,
To do "automatic lifting" you need to do a (higher-order) effect analysis, then make sure the
compiler doesn't rearrange applications which have conflicting effects.
Hrm, I disagree. I don't think this is what I was advocating in my
message.

What I'm advocating is a reasonably simple modification of the type
checker to allow a more concise syntax. Unless I'm misunderstanding
you, there is no special "effect analysis" needed.

I haven't worked it out exactly, but what you'd do is the following:

1. keep track of when you are unifying types within a "monadic
context"; for instance when you unify "Monad m => x -> m b" with
"Monad m => y -> m b", you have to unify "x" and "y". This second
unification of "x" and "y" will be done within a "context" to which
the monad "m" has been added, to make a note of the fact that
computations in "m" within "x" or "y" can be lifted.

2. if two types don't unify, but you can get them to unify by
inserting a lift operation from one of the current context monads,
then do that. e.g. when you find an application where a function
expects an argument of type "a" and the user is passing something
of type "m a", and "m" is in the context (so we know that we can
eventually get rid of it), then do the application with ap
instead of "$".

I don't pretend that this is rigorous, but I do hope it gives a better
idea of what I'm talking about doing. The point of the last few
paragraphs of my message was to argue that even with this syntax
change, users will still be able to easily reason about the
side-effects of monadic parts of their code. Do you disagree with that
assertion? Or are you just saying that the syntax change as I propose
it is called "effect analysis"?

Frederik
--
http://ofb.net/~frederik/
Frederik Eaton
2005-09-08 08:19:40 UTC
I guess what I don't understand is what's wrong with the first
approach:
Post by Ben Lippmeier
One way of preventing the compiler from rearranging effects is to thread
through a dummy variable - like a "World token", a la the IO monad - which
makes the order of operations explicit as an extra data dependency, then
compile the resulting code.
which sounds sort of the same as the semantics I'm envisioning.

Frederik
--
http://ofb.net/~frederik/
Jeremy Gibbons
2005-09-08 08:50:39 UTC
Post by Frederik Eaton
I want the type system to be able to do "automatic lifting" of monads,
[1,2]+[3,4]
and have it interpreted as "do {a<-[1,2]; b<-[3,4]; return (a+b)}".
You might want to take a look at "Monadification of Functional Programs"
by Erwig and Ren (Science of Computer Programming, 52:101-129, 2004):

http://web.engr.oregonstate.edu/~erwig/papers/abstracts.html

which describes the transformation that introduces such a monadic
structure.

Jeremy

***@comlab.ox.ac.uk
Oxford University Computing Laboratory, TEL: +44 1865 283508
Wolfson Building, Parks Road, FAX: +44 1865 273839
Oxford OX1 3QD, UK.
URL: http://www.comlab.ox.ac.uk/oucl/people/jeremy.gibbons.html
Keean Schupke
2005-09-08 09:15:45 UTC
Can't you do automatic lifting with a "Runnable" class:

class Runnable x y where
run :: x -> y

instance Runnable (m a) (m a) where
run = id

instance Runnable (s -> m a) (s -> m a) where
run = id

instance (Monad m, Monad n, MonadT t m, Runnable (m a) (n a)) =>
    Runnable (t m a) (n a)
  where
  run = run . down

instance (Monad m, MonadT t m) => Runnable (t m a) (m a)
  where
  run = down

Where:

up :: m a -> t m a
down :: t m a -> m a

For example for StateT:

downST :: Monad m => StateT st m a -> (st -> m a)
downST m = \st -> do
  (a,_) <- runStateT m st
  return a

downST' :: Monad m => (b -> StateT st m a) -> (st -> b -> m a)
downST' m = \st b -> do
  (a,_) <- runStateT (m b) st
  return a

instance (Monad m, Monad n, Runnable (st -> m s) (st -> n s)) =>
    Runnable (StateT st m s) (st -> n s)
  where
  run = run . downST

instance Monad m => Runnable (StateT st m s) (st -> m s) where
  run = downST

Keean.
Frederik Eaton
2005-09-09 20:59:16 UTC
Post by Keean Schupke
class Runnable x y where
run :: x -> y
instance Runnable (m a) (m a) where
run = id
instance Runnable (s -> m a) (s -> m a) where
run = id
instance (Monad m,Monad n,MonadT t m,Runnable (m a) (n a)) => Runnable (t m a) (n a) where
run = run . down
Interesting...
Post by Keean Schupke
run = down
The above is redundant, right?
Post by Keean Schupke
up :: m a -> t m a
down :: t m a -> m a
...
So, 'run' is more like a form of casting than running, right?

How do I use it to add two lists? Where do the 'run' applications go?
Do you have an explicit example?

I was trying to test things out, and I'm running into problems with
the type system, for instance when I declare:

class Cast x y where
cast :: x -> y

instance Monad m => Cast x (m x) where
cast = return

p1 :: (Monad m, Num a) => m (a -> a -> a)
p1 = cast (+)

it says:

Could not deduce (Cast (a1 -> a1 -> a1) (m (a -> a -> a)))
from the context (Monad m, Num a)
arising from use of `cast' at runnable1.hs:14:5-8

But this should match the instance I declared; I don't understand what
the problem is.

Frederik
--
http://ofb.net/~frederik/
Frederik Eaton
2005-09-09 21:24:01 UTC
Anyway, if the idea is to ultimately wrap every value in an expression
like ([1,2]+[3,4]) in a 'run' application, that doesn't sound very
useful. Program structure might be improved, but it would be bloated
out by these calls. Also, I don't know what would happen to the

I think it would be more useful if the compiler took care of this
automatically. I think it would be worthwhile just for making

Frederik

P.S. By the way, did you misunderstand what I meant by 'automatic
lifting'? Note that I'm talking about "lift" as in 'liftM', not 'lift'
as in monad transformers.
--
http://ofb.net/~frederik/
Wolfgang Lux
2005-09-08 09:17:13 UTC
Post by Frederik Eaton
I want the type system to be able to do "automatic lifting" of monads,
[1,2]+[3,4]
and have it interpreted as "do {a<-[1,2]; b<-[3,4]; return (a+b)}".
Are you sure that this is the interpretation you have in mind? The
expression do {a<-[1,2]; b<-[3,4]; return (a+b)} does *not* compute the
element-wise sum of the two lists, but returns the list [4,5,5,6]. To
me, this would be a very counter intuitive result for an expression
[1,2]+[3,4].

Wolfgang
Frederik Eaton
2005-09-08 10:02:38 UTC
Post by Wolfgang Lux
Post by Frederik Eaton
I want the type system to be able to do "automatic lifting" of monads,
and have it interpreted as "do {a<-[1,2]; b<-[3,4]; return (a+b)}".
Are you sure that this is the interpretation you have in mind? The
expression do {a<-[1,2]; b<-[3,4]; return (a+b)} does *not* compute the
element-wise sum of the two lists, but returns the list [4,5,5,6]. To
me, this would be a very counter intuitive result for an expression
[1,2]+[3,4].
Thanks for bringing up a good point. Yes, this is what I have in mind.

As I see it, the monadic interface for lists gives them the semantics
of (multi)sets. Adding two sets could only be interpreted as I have
said.

If you were adding, say, arrays, elementwise, the monad would be more
like a reader monad, which I also gave an example of, with the
parameter being the array index.
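That reader-style elementwise reading can already be spelled out with the function monad ((->) r); a small sketch (the name `addFns` is hypothetical):

```haskell
import Control.Monad (liftM2)

-- ((->) r) is a reader monad, so lifting (+) over it adds functions
-- pointwise: (f + g)(x) = f(x) + g(x), matching the f/g example above
addFns :: (Int -> Int) -> (Int -> Int) -> (Int -> Int)
addFns = liftM2 (+)
```

With f(x)=x+1 and g(x)=x+4, addFns f g 10 is (10+1)+(10+4) = 25.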

Furthermore, it's hard to see how one would elegantly flesh out the
semantics you propose for lists. What if the two lists have different
lengths? Thus I think set semantics is more appropriate for a list monad.

Frederik
--
http://ofb.net/~frederik/
Ben Rudiak-Gould
2005-09-14 15:13:22 UTC
Post by Frederik Eaton
I want the type system to be able to do "automatic lifting" of monads,
[1,2]+[3,4]
and have it interpreted as "do {a<-[1,2]; b<-[3,4]; return (a+b)}".
The main problem is ambiguity: [[1]]++[[2]] could be [[1],[2]] or [[1,2]],
for example. If your proposal resolves this ambiguity in favor of one result
or the other, then I'm opposed to it -- it's far too easy in practice to
write code like this, expecting it to be lifted, and failing to notice that
it also has an interpretation without lifting (or the other way around).
This is the Perl FWIM disease[1] -- not that I dislike Perl, but people are

I have another proposal, though. Introduce a new keyword, which I'll call
"borrow" (the opposite of "return"), that behaves like a function of type
(Monad m) => m a -> a inside of do statements. More precisely, a do
expression of the form

do { ... ; ... borrow E ... ; ... }

is transformed into

do { ... ; x <- E ; ... x ... ; ... }

where x is a fresh variable. If more than one borrow form appears in the
same do statement, they are pulled out from left to right, which matches the
convention already used in liftM2, ap, mapM, etc.
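As a concrete illustration (hand-desugared, since "borrow" is only a proposal), the list-monad example from earlier in the thread:

```haskell
-- with the proposed keyword one would write, e.g.:
--   sums = do return (borrow [1,2] + borrow [3,4])
-- which, pulling borrows out left to right, stands for:
sums :: [Int]
sums = do
  x <- [1, 2]   -- first borrow
  y <- [3, 4]   -- second borrow
  return (x + y)
```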

Pros:

+ Pure sugar; no tricky interactions with the type system.

+ Nice symmetry between putting a value in a monad and taking it out.

+ Potentially helpful for beginners who ask "how do I get a String
from an IO String?"

Cons:

- Needs a new keyword.

- Pretends to be an expression but really isn't; perhaps
distinctive syntax would be better. (Inline "<-"?)

- Depends on enclosing do keyword: in particular, do { stmt } would
no longer be equivalent to stmt, if stmt contains "borrow".
Potential confusion.

- Potentially misleading for beginners (but then so is do notation,
and the keyword "class", and n+k patterns, and so on...)

-- Ben

Bjorn Lisper
2005-09-14 20:51:36 UTC
Post by Ben Rudiak-Gould
Post by Frederik Eaton
I want the type system to be able to do "automatic lifting" of monads,
[1,2]+[3,4]
and have it interpreted as "do {a<-[1,2]; b<-[3,4]; return (a+b)}".
The main problem is ambiguity: [[1]]++[[2]] could be [[1],[2]] or [[1,2]],
for example. If your proposal resolves this ambiguity in favor of one result
or the other, then I'm opposed to it -- it's far too easy in practice to
write code like this, expecting it to be lifted, and failing to notice that
it also has an interpretation without lifting (or the other way around).
This is the Perl FWIM disease[1] -- not that I dislike Perl, but people are
However, there is a way to resolve the ambiguity that can be claimed to be
the most natural one, and that is to always choose the "least possible"
lifting. In the example above, this would mean to interpret [[1]]++[[2]]
precisely as [[1]]++[[2]] (lift 0 levels) rather than [[1]++[2]] (lift 1
level). This is akin to choosing the most general type in the pure
Hindley-Milner type system, and it has the advantage that expressions that
are typable in the original type system, without lifting, retain their
semantics in the type system with lifting added.

Lifting (mainly of arithmetic operators) has been around for a long time in
array- and data parallel languages such as Fortran 90 and *lisp. The kind
of ambiguity mentioned here was first observed for nested data-parallel
languages like NESL, which use nested sequences as parallel data structures.

Now, whether to include this kind of lifting in Haskell or not is an
entirely different story. One complication would be to handle possible
ambiguities of the kind discussed above. I think it would be interesting
to be able to define application-specific
Haskell dialects, which can have this kind of lifting for a restricted set
of types and/or functions, whereas having it as a general feature of the
language would be quite problematic.

Björn Lisper
Ben Rudiak-Gould
2005-09-17 12:00:19 UTC
Post by Bjorn Lisper
However, there is a way to resolve the ambiguity that can be claimed to be
the most natural one, and that is to always choose the "least possible"
lifting. In the example above, this would mean to interpret [[1]]++[[2]]
precisely as [[1]]++[[2]] (lift 0 levels) rather than [[1]++[2]] (lift 1
level).
It's not the mathematics I'm worried about, it's the people. Consider the
"time flies like an arrow; fruit flies like a banana" example. What's
interesting about it is not just the existence of more than one parse, but
the fact that it's hard to even notice the other parses exist unless you're
looking for them, because people are so good at unconsciously resolving this
kind of ambiguity from context.

I'm afraid that once people get used to the idea that they can write
xs op ys and get implicit lifting, they will write xs ++ ys and never
notice that it has an unlifted interpretation which will take precedence.
This isn't the nastiest class of bug, but it's pretty nasty.

-- Ben
Bulat Ziganshin
2005-09-15 19:18:12 UTC
Hello Ben,

Wednesday, September 14, 2005, 6:32:27 PM, you wrote:

BRG> do { ... ; ... borrow E ... ; ... }

BRG> is transformed into

BRG> do { ... ; x <- E ; ... x ... ; ... }

I strongly support this suggestion. Actually, I suggest the same for
dealing with references (IORef/MVar/...), for example:

do x <- newIORef 0
y <- newIORef 0
z <- newIORef 0
z := *x + *y -- translated to { x' <- readIORef x; y' <- readIORef y; writeIORef z (x'+y') }
--
Best regards,
Bulat mailto:***@HotPOP.com
Lyle Kopnicky
2005-09-15 19:33:31 UTC
Post by Bulat Ziganshin
Hello Ben,
BRG> do { ... ; ... borrow E ... ; ... }
BRG> is transformed into
BRG> do { ... ; x <- E ; ... x ... ; ... }
i strongly support this suggestion. actually, i suggest the same for
do x <- newIORef 0
y <- newIORef 0
z <- newIORef 0
z := *x + *y -- translated to { x' <- readIORef x; y' <- readIORef y; writeIORef z (x'+y') }
Right, I realize my suggestion is the same as Ben's. I just prefer a
more succinct notation, like special brackets instead of a keyword. I
like your idea about IORefs. I think it should work as well for
STRefs... perhaps it needs to belong to a type class, in a way?

- Lyle
Bulat Ziganshin
2005-09-15 20:25:49 UTC
Hello Lyle,
Post by Bulat Ziganshin
z := *x + *y -- translated to { x' <- readIORef x; y' <- readIORef y; writeIORef z (x'+y') }
LK> Right, I realize my suggestion is the same as Ben's. I just prefer a
LK> more succinct notation, like special brackets instead of a keyword. I
LK> like your idea about IORefs. I think it should work as well for
LK> STRefs... perhaps it needs to belong to a type class, in a way?

of course

class Ref c a where
new :: a -> IO (c a)
get :: c a -> IO a
set :: c a -> a -> IO ()
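An IORef instance of this class might look like the following sketch (the two language extensions are an assumption needed for a two-parameter class with a bare type variable in the instance head):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}
import Data.IORef

-- the class as posted above, restated
class Ref c a where
  new :: a -> IO (c a)
  get :: c a -> IO a
  set :: c a -> a -> IO ()

-- IORef supports all three operations directly
instance Ref IORef a where
  new = newIORef
  get = readIORef
  set = writeIORef
```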
--
Best regards,
Bulat mailto:***@HotPOP.com
robert dockins
2005-09-15 21:06:23 UTC
I raise you:

class (Monad m) => Ref m c | c -> m
where new :: a -> m (c a)
get :: c a -> m a
peek :: c a -> m a
set :: c a -> a -> m ()
modify :: c a -> (a -> a) -> m a
modify_ :: c a -> (a -> a) -> m ()
modifyM :: c a -> (a -> m a) -> m a
modifyM_ :: c a -> (a -> m a) -> m ()
Post by Bulat Ziganshin
Hello Lyle,
Post by Bulat Ziganshin
z := *x + *y -- translated to { x' <- readIORef x; y' <- readIORef y; writeIORef z (x'+y') }
LK> Right, I realize my suggestion is the same as Ben's. I just prefer a
LK> more succinct notation, like special brackets instead of a keyword. I
LK> like your idea about IORefs. I think it should work as well for
LK> STRefs... perhaps it needs to belong to a type class, in a way?
of course
class Ref c a where
new :: a -> IO (c a)
get :: c a -> IO a
set :: c a -> a -> IO ()
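[Editor's note: the functional dependency in Robert's class lets one set of method names cover both IO/IORef and ST/STRef; a trimmed compilable sketch (the instances, the default for modify_, and main are additions). Note the fundep only runs c -> m, so creating a reference still needs the concrete constructor (newSTRef below), since m alone does not determine c:]

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}
import Control.Monad.ST
import Data.IORef
import Data.STRef

-- Robert's class, trimmed to a few methods; modify_ gets a default
-- definition in terms of get and set.
class Monad m => Ref m c | c -> m where
  new :: a -> m (c a)
  get :: c a -> m a
  set :: c a -> a -> m ()
  modify_ :: c a -> (a -> a) -> m ()
  modify_ r f = get r >>= set r . f

instance Ref IO IORef where
  new = newIORef; get = readIORef; set = writeIORef

instance Ref (ST s) (STRef s) where
  new = newSTRef; get = readSTRef; set = writeSTRef

main :: IO ()
main = print (runST (do r <- newSTRef (1 :: Int)
                        modify_ r (+1)
                        get r))
```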
Einar Karttunen
2005-09-15 21:07:47 UTC
Post by Bulat Ziganshin
of course
class Ref c a where
new :: a -> IO (c a)
get :: c a -> IO a
set :: c a -> a -> IO ()
Maybe even:

class Ref m t where
    new :: a -> m (t a)
    get :: t a -> m a
    set :: t a -> a -> m ()

Or if you want to support things like FastMutInts:

class Ref m t v where
    new :: v -> m t
    get :: t -> m v
    set :: t -> v -> m ()

That would even support an evil IOArray instance:

instance Ref IO (IOArray Int v, Int) v where
    new iv = newArray_ (0,iv)
    get (arr,idx) = readArray arr idx
    set (arr,idx) val = writeArray arr idx val

- Einar Karttunen
Sergey Zaharchenko
2005-09-16 12:43:42 UTC
Hello Bulat!
Post by Bulat Ziganshin
Hello Ben,
BRG> do { ... ; ... borrow E ... ; ... }
BRG> is transformed into
BRG> do { ... ; x <- E ; ... x ... ; ... }
i strongly support this suggestion. actually, i suggest the same for
do x <- newIORef 0
y <- newIORef 0
z <- newIORef 0
z := *x + *y -- translated to { x' <- readIORef x; y' <- readIORef y; writeIORef z (x'+y') }
I might be misunderstanding, but aren't we going to introduce evaluation
order for '+' in this case?
--
DoubleF
No virus detected in this message. Ehrm, wait a minute...
/kernel: pid 56921 (antivirus), uid 32000: exited on signal 9
Oh yes, no virus:)
Wolfgang Jeltsch
2005-09-16 14:36:36 UTC
Post by Sergey Zaharchenko
[...]
Post by Bulat Ziganshin
do x <- newIORef 0
y <- newIORef 0
z <- newIORef 0
z := *x + *y -- translated to { x' <- readIORef x; y' <- readIORef
y; writeIORef z (x'+y') }
I might be misunderstanding, but aren't we going to introduce evaluation
order for '+' in this case?
I think so and I think this is the case also for the approaches of including
monadic expressions into "ordinary" expressions. That is, in my opinion, a
strong argument against these proposals. One thing I like about Haskell is
that side-effects are strictly separated from evaluation so that there is no
such thing like an implicit evaluation order.

Best wishes,
Wolfgang
Bulat Ziganshin
2005-09-16 15:54:48 UTC
Hello Wolfgang,

Friday, September 16, 2005, 5:55:52 PM, you wrote:
WJ> strong argument against these proposals. One thing I like about Haskell is
WJ> that side-effects are strictly separated from evaluation so that there is no
WJ> such thing like an implicit evaluation order.

like in assembler? ;)
--
Best regards,
Bulat mailto:***@HotPOP.com
Bulat Ziganshin
2005-09-16 15:54:14 UTC
Hello Sergey,
Post by Bulat Ziganshin
z := *x + *y -- translated to { x' <- readIORef x; y' <- readIORef y; writeIORef z (x'+y') }
SZ> I might be misunderstanding, but aren't we going to introduce evaluation
SZ> order for '+' in this case?

of course. really, in many situations the evaluation order just doesn't
matter, including all the IORef usage i could imagine

btw, if anyone is interested, an explicit operation for dereferencing
variables is used in ML (afair, it is called just "."), and in some old
systems programming language, maybe BLISS

of course, automatic dereferencing could be much better, but it raises
some potential double-readings, just like auto-lifting (f.e., for "z:=x",
must z have type "IORef a" or "IORef (IORef a)", if x has type
"IORef a"?)
--
Best regards,
Bulat mailto:***@HotPOP.com
Udo Stenzel
2005-09-16 16:03:43 UTC
Post by Bulat Ziganshin
i strongly support this suggestion. actually, i suggest the same for
do x <- newIORef 0
y <- newIORef 0
z <- newIORef 0
z := *x + *y -- translated to { x' <- readIORef x; y' <- readIORef y; writeIORef z (x'+y') }
May I humbly suggest you explain what problem you are actually trying to
solve here? I've never felt the need to hide monadic binding behind
fancy syntax, defining some combinator was always sufficient. I mean...
if you want C, you know where to find it. If possible I'd rather not
see the same programming "style" in Haskell.

Udo.
--
<Marticus> There's too much blood in my caffeine system.
Bulat Ziganshin
2005-09-16 19:25:38 UTC
Hello Udo,
Post by Bulat Ziganshin
do x <- newIORef 0
y <- newIORef 0
z <- newIORef 0
z := *x + *y -- translated to { x' <- readIORef x; y' <- readIORef y; writeIORef z (x'+y') }
US> May I humbly suggest you explain what problem you are actually trying to
US> solve here? I've never felt the need to hide monadic binding behind
US> fancy syntax, defining some combinator was always sufficient. I mean...
US> if you want C, you know where to find it. If possible I'd rather not
US> see the same programming "style" in Haskell.

you're absolutely right - i sometimes need C, and don't hesitate to use
it just to write several lines, such as when i need to compute the CRC
of a data stream
--
Best regards,
Bulat mailto:***@HotPOP.com
Frederik Eaton
2005-09-15 21:24:10 UTC
Post by Ben Rudiak-Gould
I have another proposal, though. Introduce a new keyword, which I'll
call "borrow" (the opposite of "return"), that behaves like a
function of type (Monad m) => m a -> a inside of do statements. More
precisely, a do expression of the form
do { ... ; ... borrow E ... ; ... }
is transformed into
do { ... ; x <- E ; ... x ... ; ... }
where x is a fresh variable. If more than one borrow form appears in
the same do statement, they are pulled out from left to right, which
matches the convention already used in liftM2, ap, mapM, etc.
I think this is a good idea. I like the inline "<-", or maybe
something like "@".

I'm not sure what you intend to do about nested "do" statements,
though. If they correspond to different monads, I might want to have a
'borrow' in the inner "do" statement create a lifted expression in the
outer "do" statement. Furthermore, I might want to have a lifted
expression in the outer "do" create something which needs to be
evaluated again in the monad of the inner "do" to produce the final
value.

In any case, it would certainly be good to have better support for
lifting; and something which doesn't weaken the type system is likely
to be implemented before something that does is, so I am in favor of
investigation along the lines of your proposal.

Frederik
--
http://ofb.net/~frederik/
Ben Rudiak-Gould
2005-09-17 18:37:15 UTC
Post by Frederik Eaton
I think this is a good idea. I like the inline "<-", or maybe
The operator-section notation (<- expr) has the big advantage of being
unlikely to collide with any other syntax proposals.
Post by Frederik Eaton
I'm not sure what you intend to do about nested "do" statements,
though. If they correspond to different monads, I might want to have a
'borrow' in the inner "do" statement create a lifted expression in the
outer "do" statement.
In such cases you almost certainly wouldn't want to use this notation
anyway. It's confusing enough with a single monad.

As an experiment I tried rewriting some monadic code from one of my projects
(Reform) using the (<- expr) syntax. It was interesting. Some points:

* Most of my uses of <- could be replaced with the inline form. In
some cases it seemed like a bad idea, because the intermediate
variable name was useful as documentation. In the other cases I'd
obviously chosen a meaningless intermediate variable name just to
get around the lack of a feature like this one.

* I found myself writing "do return" a lot, which isn't a combination
you usually see in Haskell code. It felt odd, but perhaps you'd get
used to it.

* The new syntax is really nice as a replacement for the annoyingly
common "x <- foo ; case x of..." idiom that I've always disliked.

* The increased similarity to programming in an eager language was
startling at times. I'm just not used to thinking eagerly when I
write Haskell, even though I do it all the time in other languages.

* There are tricky corner cases. For example,

x <- getByte
if x < 128
then return x
else return (x - 128) * 256 + (<- getByte)

doesn't do what it's probably intended to do: the second byte will
be read even if x < 128. You have to write "else do return"
instead of "else return". I'm afraid this would be easy to get
wrong. It wouldn't be hard to make the translation assume an
implicit do at the branches of if and case statements, but this
wouldn't always be what you'd want. Even worse is that short-
circuiting && and || don't work as they do in eager languages.
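[Editor's note: for comparison, here is Ben's corner case desugared by hand, as it must be written today; popByte is a stand-in byte source over an IORef, added for illustration, so the "second byte only when needed" behaviour is observable:]

```haskell
import Data.IORef

-- The "do return" version desugars so that the second read happens
-- only in the else branch:
readVarByte :: IO Int -> IO Int
readVarByte getByte = do
  x <- getByte
  if x < 128
    then return x
    else do y <- getByte               -- only read when x >= 128
            return ((x - 128) * 256 + y)

main :: IO ()
main = do
  src <- newIORef [130, 7, 99]
  let popByte = do b:bs <- readIORef src
                   writeIORef src bs
                   return b
  print =<< readVarByte popByte        -- consumes two bytes: 519
  print =<< readVarByte popByte        -- consumes one byte: 99
```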

So on reflection I'm not sure I think this proposal is a good idea. I
probably wouldn't be able to write or read code in this style without doing
manual desugaring in my head, which takes away much of the appeal. But it
might be worth adding if people can be trusted to use it judiciously.

-- Ben
Thomas Jäger
2005-09-17 23:49:04 UTC
Hello,

I haven't followed this discussion very closely, but in case you want to
play with this sort of thing, you can check out the code from my
TMR-Article

Despite the wacky implementation it is actually surprisingly reliable,
modulo some (linear) implicit parameter quirks.

The [[1,2,3,4]] vs. [[1],[2],[3],[4]] ambiguity is resolved using the
same method as the borrow/<- proposal (by a function called 'reflect');
and the function 'reify' corresponds to the enclosing do. A notable
difference is that my implementation always shows in the type whether a
specific expression is a 'monadic' one. E.g.

*Reflection> reify [1,2,3,4] :: [[Int]]
[[1,2,3,4]]
*Reflection> [reify 1, reify 2,reify 3,reify 4] :: [[Int]]
[[1],[2],[3],[4]]
*Reflection> reify [reflect [1,2], reflect [3,4]] :: [[Int]]
[[1,3],[1,4],[2,3],[2,4]]

Here [reflect [1,2], reflect [3,4]] has the type Monadic [] [Int] (where
Monadic is a type synonym involving linear implicit parameters...).

However, when doing such a thing, one must decide on an evaluation
order. Because of the way it is implemented, my code uses Haskell's lazy
strategy but that makes reasoning about the resulting programs very hard
(though I believe it can be made precise):

*Reflection> reify (tail $ [reflect [1,2],3,4]) :: [[Int]]
[[3,4]]
*Reflection> reify (tail $!! [reflect [1,2],3,4]) :: [[Int]]
[[3,4],[3,4]]

(here, $!! is deepSeq'ed application)

An eager strategy might be less obscure, but suffers from the problems
described by Ben. It should also be noted that a Call-By-Name-like
strategy can be emulated with appropriate types, as in:

*Reflection> reify (let x :: Monadic [] Int; x = reflect [1,2] in [x,x])
[[1,1],[1,2],[2,1],[2,2]]
*Reflection> reify (let x :: Int; x = reflect [1,2] in [x,x])
[[1,1],[2,2]]

Of course, in case of no explicit type signature, the monomorphism
restriction plays an infamous role and the behavior depends on the flag
-fno-monomorphism-restriction.

Thomas
Graham Klyne
2004-03-23 22:13:43 UTC
Post by Sean E. Russell
I think you're rather stuck with the "temporary variables" (which they're
not really), but it might be possible to hide some of the untidiness in an
auxiliary monadic function.
That seems to be the common suggestion: write my own visitors.

I'm just surprised that there isn't a more elegant mechanism for getting
interoperability between monadic and non-monadic functions. The current
state of affairs just seems awkward.

[Warning: quasi-rant]

Caveat: I'm not smart enough, and I don't know enough, to criticize Haskell,
so please don't misconstrue my comments. To quote Einstein: "When I'm asking
simple questions and I'm getting simple answers, I'm talking to God." I
simply mistrust, and therefore question, systems where simple things are
overly involved.

The standard explanation about why monads are so troublesome always sounds
like an excuse to me. We have monads because they allow side-effects. OK.
If programs that used side effects were uncommon, I'd be fine with them being
troublesome -- but they aren't. Maybe it is just me, but my Haskell programs
invariably develop a need for side effects within a few tens of lines of
code, whether IO, Maybe, or whatnot. And I can't help but think that
language support to make dealing with monads easier -- that is, to integrate
monads with the rest of the language, so as to alleviate the need for
constant lifting -- would be a Good Thing.
I'd make one further point in response... I don't think the "heavy lifting"
here is because of Monads in general: some Monads (Maybe, lists, etc.) mix
very easily with pure functional code. I think it's the IO Monad in
particular: here you are creating interactions between two fundamentally
different environments: the real world, which is fundamentally stateful and
non-referentially-transparent, and mathematical functions, which are quite
the opposite.
Post by Sean E. Russell
Hmmm. Could I say that Haskell requires "heavy lifting"? :-)
#g

------------
Graham Klyne
For email: http://www.ninebynine.org/#Contact
Iavor S. Diatchki
2004-03-23 19:34:32 UTC
hi,
at some level you are right that some more syntactic sugar and stuff could
make monads more attractive. for the time being here is how i'd write what
you want below:

f # m = liftM f m
mx === my = liftM2 (==) mx my

assertBool "fail" $ (length # someFunc a) === (length # someFunc b)

at the moment we only have syntactic sugar for working with the list monad
parameters).

hope this helps
-iavor
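[Editor's note: spelled out with fixities and type signatures (which are additions; the thread gives only the two definitions), Iavor's combinators compile as:]

```haskell
import Control.Monad (liftM, liftM2)

infixl 9 #
infix  4 ===

-- Apply a pure function to a monadic result.
(#) :: Monad m => (a -> b) -> m a -> m b
f # m = liftM f m

-- Compare two monadic results for equality.
(===) :: (Monad m, Eq a) => m a -> m a -> m Bool
mx === my = liftM2 (==) mx my

main :: IO ()
main = do
  ok <- (length # return "abc") === (length # return "xyz")
  print ok
```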
Post by Sean E. Russell
Hello,
I posted this question to comp.lang.functional, and someone suggested that I
I'm struggling with monads. Well, not monads themselves, but mixing them with
someFunc :: String -> IO [a]
...
ax <- someFunc a
bx <- someFunc b
assertBool "fail" $ length ax == length bx
I don't like the assignments; the typing is redundant, if I have a lot of
asserts like this, and the "variables" are temporary. What I'd much rather
have is:
...
assertBool "fail" $ (length $ someFunc a) == (length $ someFunc b)
which is more readable, to my eye.
The only solution which has been suggested that may work is liberal use of the
liftM variants, but this gets *really* tedious and obtuse.
Is there an elegant way to do what I want to do, or am I stuck with
procedural-style assignments and bunches of temp vars?
Thanks!
Ben Rudiak-Gould
2004-03-23 23:04:48 UTC
Post by Sean E. Russell
like an excuse to me. We have monads, because they allow side-effects. Ok.
If programs that used side effects were uncommon, I'd be fine with them being
troublesome -- but they aren't. Maybe it is just me, but my Haskell programs
invariably develop a need for side effects within a few tens of lines of
code, whether IO, Maybe, or whatnot. And I can't help but think that
language support to make dealing with monads easier -- that is, to integrate
monads with the rest of the language, so as to alleviate the need for
constant lifting -- would be a Good Thing.
I agree with this, but the support you describe is difficult to add.

Programming languages differ not so much in what they make possible as in
what they make easy. In Haskell (to pick just a few examples):

* Memory management (allocation and deallocation) is effortless.

* Creating lexical closures is very easy.

* You don't have to declare the types of all your functions and local
bindings, because the implementation can figure them out for itself.

* You don't have to ensure that values are computed before they're used,
because the implementation handles that too.

If you were learning C instead of Haskell, you'd be complaining (and
rightly so) about the effort required to do these things in C.

Unfortunately, no one has figured out how to make everything easy at the
same time, and the problem you've run into is an example of this. The
monad I/O model exists because of implicit data dependencies (the last
bullet point above); the "heavy lifting" exists because of type inference.
We'd all love to make the lifting implicit, but no one knows how to do it
without breaking the whole language.

A related problem which was discussed here recently is Haskell's lack of
per-type namespaces, something which even C programmers take for granted.
Again, the problem is the tricky interaction with type inference.

Unless/until these problems are resolved, all you can do is learn a bunch
of different languages and use the one which is most convenient for a
particular task. Haskell, for all its problems, is a strong contender for

-- Ben
Sean E. Russell
2004-03-23 23:54:11 UTC
Post by Ben Rudiak-Gould
* Memory management (allocation and deallocation) is effortless.
* Creating lexical closures is very easy.
* You don't have to declare the types of all your functions and local
bindings, because the implementation can figure them out for itself.
* You don't have to ensure that values are computed before they're used,
because the implementation handles that too.
If you were learning C instead of Haskell, you'd be complaining (and
rightly so) about the effort required to do these things in C.
Actually, most of these things are pretty easy in Ruby, too. Everything is
easy in Ruby :-) But what I like about Haskell is stuff like pattern
matching and list comprehension. I also particularly like the effortless
strong typing. Before Haskell, I thought strong typing meant variable type
declarations, which are tedious.

When I find myself doing design and writing pseudocode, and having it look a
lot like a language I use, I decide that the language is Good. So far, this
has happened to me with both Haskell and Ruby -- Haskell at a higher level,
when I'm thinking about functions that I'll need, Ruby when I'm sketching out
algorithms. In eight years of coding Java professionally, I *never* found
myself writing any pseudo-code that looked anything like Java.
Post by Ben Rudiak-Gould
We'd all love to make the lifting implicit, but no one knows how to do it
without breaking the whole language.
I've heard people talk about the functional "purity" of Haskell -- you'd have
to break this purity to add implicit lifting?
Post by Ben Rudiak-Gould
A related problem which was discussed here recently is Haskell's lack of
per-type namespaces, something which even C programmers take for granted.
Again, the problem is the tricky interaction with type inference.
Augh! Yes! I've hit that as well. Well, in my case, it was constructors. I
was trying to do:

data Effort = Easy | Middle | Hard
data Impact = Low | Middle | High

Effort and Impact aren't related (in any useful ontological sense that I can
think of), so I don't want to make them instances of the same type -- but I
can't reuse Middle without it (AFAIK). So I had to fudge, and call the
Impact constructor "Medium", which sort of grates, as you can imagine.

Yeah, that's something I'd like a work-around for, too.
Post by Ben Rudiak-Gould
Unless/until these problems are resolved, all you can do is learn a bunch
of different languages and use the one which is most convenient for a
particular task. Haskell, for all its problems, is a strong contender for
What I use Haskell for are those tasks where I really want something compiled.
What surprised me, after I'd used it for a few weeks, was that my
applications were more robust than I expected, given the novelty of the
language to me. If I were to write any software (in a language that I know)
upon which lives depended, it'd be in Haskell.

I find that, with Ruby, I don't struggle -- the code just flows -- but I
*liberally* use unit testing, and even so I spend a fair amount of time
debugging . With Haskell, I spend some effort figuring out how to solve the
problem and some time getting it to compile, but once it does, it generally
works as expected. Very rarely do I need to debug code that compiles. With
Java, I spend a fair amount of time getting stuff compiled and running, and
then even more time debugging, and then more time fixing stuff later.

The most amazing thing to me about Java is how little the compilation phase
contributes to making code more robust. My Ruby is no more buggy than my
Java, and it is loosely typed and entirely interpreted. In fact, I generally
trust Java applications less than Ruby applications.

But, I fear I'm wandering off-topic. Thanks to everybody for the feedback
about my problem; I have a working solution using the visitor pattern, and
(since the other monads I use -- such as Maybe -- are less troublesome), it
is "good enough".
--
### SER
### Deutsch|Esperanto|Francaise|Linux|XML|Java|Ruby|Aikido
### http://www.germane-software.com/~ser jabber.com:ser ICQ:83578737
### GPG: http://www.germane-software.com/~ser/Security/ser_public.gpg
ozone at algorithm.com.au ()
2004-03-24 01:04:09 UTC
Post by Sean E. Russell
Post by Ben Rudiak-Gould
We'd all love to make the lifting implicit, but no one knows how to do it
without breaking the whole language.
I've heard people talk about the functional "purity" of Haskell -- you'd have
to break this purity to add implicit lifting?
I don't think you would break the functional purity of Haskell if you
did such a thing, but you'd probably need some new syntax to help out
the type system. Perhaps something like:

assertBool "fail" $ length (<somefunc a>) == length (<somefunc b>)

So here, (<a>) performs the same function as the <- operator, but
assigns the result to an "anonymous variable" and is instantly consumed
by whatever function uses its value. Please don't berate me for my
choice of syntax, by the way: that was just an example :).

One problem with this is that you now don't know in what order those two
operations take place: does "somefunc a" run first, or does "somefunc b"
run first? You have this same problem in other imperative languages too;
I'm not sure if, e.g., C defines an order for function evaluation to
occur. Perhaps you could just dictate a left-to-right order, or maybe
come up with bizarre (<1 ... >) and (<2 ... >) constructs to indicate
what order things should run in. Some ideas to get started, anyway. I do
agree with you that not having syntactic sugar to do something like that
is somewhat inconvenient, and it would be a nice addition to the
language.
Post by Sean E. Russell
Post by Ben Rudiak-Gould
A related problem which was discussed here recently is Haskell's lack of
per-type namespaces, something which even C programmers take for granted.
Again, the problem is the tricky interaction with type inference.
Augh! Yes! I've hit that as well. Well, in my case, it was constructors. I
was trying to do:
data Effort = Easy | Middle | Hard
data Impact = Low | Middle | High
Perhaps one option for this would be to have explicitly quantified data
namespaces, so that you would have to write 'Effort.Middle' to construct
something of type Effort, and 'Impact.Middle' to construct something of
type Impact.

The problems you mention in Haskell are certainly solvable: it's just the
mere matter of implementing such features... :) I think the Haskell
community (at least, the community who are capable of making such changes
to the Haskell implementations) is unfortunately a bit too small to put
such syntactic niceties in the language; we simply don't have enough
human resources to do it. But I'm sure plenty of others would agree that
those features would be nice!
--
% Andre Pang : trust.in.love.to.save
Glynn Clements
2004-03-24 14:37:22 UTC
Post by ozone at algorithm.com.au ()
Post by Sean E. Russell
Post by Ben Rudiak-Gould
We'd all love to make the lifting implicit, but no one knows how to do it
without breaking the whole language.
I've heard people talk about the functional "purity" of Haskell -- you'd have
to break this purity to add implicit lifting?
I don't think you would break the functional purity of Haskell if you
did such a thing, but you'd probably need some new syntax to help out
the type system. Perhaps something like:
assertBool "fail" $ length (<somefunc a>) == length (<somefunc b>)
So here, (<a>) performs the same function as the <- operator, but
assigns the result to an "anonymous variable" and is instantly consumed
by whatever function uses its value. Please don't berate me for my
choice of syntax, by the way: that was just an example :).
One problem with this is that you now don't know what order those two
operations take place: does "somefunc a" run first, or does "somefunc
b" run first? You have this same problem in other imperative languages
too; I'm not sure if, e.g. C defines an order for function evaluation
to occur.
C doesn't define the order in which function arguments are evaluated,
so:

foo(getchar(), getchar());

could be equivalent to either:

a = getchar();
b = getchar();
foo(a, b);
or:
a = getchar();
b = getchar();
foo(b, a);

This isn't restricted to I/O, but affects any operation which has side
effects, e.g.

foo(++x, ++x);

This is *why* I/O actions aren't treated as functions in pure
functional languages.

Regarding the "solution" of using liftM (liftM2 etc): these impose a
left-to-right evaluation order, e.g.:

liftM2 :: (Monad m) => (a -> b -> c) -> (m a -> m b -> m c)
liftM2 f = \a b -> do { a' <- a; b' <- b; return (f a' b') }

One consequence of this is that, even if (==) is an equivalence
relation, "liftM2 (==)" may not be, as the order of the arguments is
significant.
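[Editor's note: a small demonstration of Glynn's point (the example is an addition): liftM2 fixes a left-to-right effect order, so a lifted (==) is sensitive to argument order even though (==) itself is symmetric:]

```haskell
import Control.Monad (liftM2)
import Data.IORef

main :: IO ()
main = do
  r <- newIORef (0 :: Int)
  -- An effectful "value": each use increments the counter and returns it.
  let bump = do modifyIORef r (+1); readIORef r
  a <- liftM2 (==) bump bump   -- left argument runs first: compares 1 == 2
  print a                      -- False, even though x == x is always True
```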

More generally, lifted functions may have semantics which differ
greatly from the underlying function. Personally, I'm quite happy that
Haskell doesn't allow this to be hidden by implicit lifting.
--
Glynn Clements <***@virgin.net>
Duncan Coutts
2004-03-24 00:50:43 UTC
Post by Sean E. Russell
someFunc :: String -> IO [a]
...
ax <- someFunc a
bx <- someFunc b
assertBool "fail" $ length ax == length bx
I don't like the assignments; the typing is redundant, if I have a lot of
asserts like this, and the "variables" are temporary. What I'd much rather
have is:
...
assertBool "fail" $ (length $ someFunc a) == (length $ someFunc b)
which is more readable, to my eye.
The only solution which has been suggested that may work is liberal use of the
liftM variants, but this gets *really* tedious and obtuse.
Is there an elegant way to do what I want to do, or am I stuck with
procedural-style assignments and bunches of temp vars?
For a project I did which involved a lot of monadic code I used some
combinators which allow you to write in a more applicative/functional
style. So instead of:

do
  a' <- foo a
  b' <- foo b
  c' <- foo c
  return $ f a' b' c'

you write:

f $> foo a <$> foo b <$> foo c

Unfortunately it doesn't work so well with infix operators, you'd have
to say (==) $> firstThing <$> secondThing
which is not quite so appealing.

Your example would look like so:
assertBool "fail" <$$> (==) $> (length $> someFunc a) <$> (length $> someFunc b)

Here's the little combinator library, it's really just a matter of using
infix versions of standard monad functions and playing with their
left/right associativity and precedence:

import Monad (liftM, ap)

infixl 1 $>, <$>   --same as liftM & ap, but left associative infix
infixr 0 <$$>      --same as =<<, but different precedence

($>) :: Monad m => (a -> b) -> (m a -> m b)
($>) = liftM

(<$>) :: Monad m => m (a -> b) -> (m a -> m b)
(<$>) = ap

(<$$>) :: Monad m => (a -> m b) -> (m a -> m b)
(<$$>) = (=<<)

--
Duncan
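[Editor's note: a version of Duncan's library that compiles with a current GHC (the Monad module is now Control.Monad, and the Prelude's own <$> must be hidden; the definitions themselves are unchanged, the main example is an addition):]

```haskell
import Prelude hiding ((<$>))
import Control.Monad (liftM, ap)

infixl 1 $>, <$>   -- same as liftM & ap, but left associative infix
infixr 0 <$$>      -- same as =<<, but different precedence

($>) :: Monad m => (a -> b) -> (m a -> m b)
($>) = liftM

(<$>) :: Monad m => m (a -> b) -> (m a -> m b)
(<$>) = ap

(<$$>) :: Monad m => (a -> m b) -> (m a -> m b)
(<$$>) = (=<<)

main :: IO ()
main = print <$$> (==) $> (length $> return "ab") <$> (length $> return "cde")
```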
Max Kirillov
2004-03-24 04:06:28 UTC
Post by Sean E. Russell
someFunc :: String -> IO [a]
...
ax <- someFunc a
bx <- someFunc b
assertBool "fail" $ length ax == length bx
...
assertBool "fail" $ (length $ someFunc a) == (length $ someFunc b)
which is more readable, to my eye.
Monads are not just a fascistic typing feature. They are there to ensure the
order of actions. Your first version makes clear (and makes sure)
that 'someFunc a' is executed before 'someFunc b'. The second does not.

If you don't need that, maybe you should inspect why your someFunc has
type '.. -> IO ..'. Usually that means that it involves some IO, so
you may not ignore the order of actions.
--
Max
2005-09-08 17:12:01 UTC
One of Mark Jones's articles suggests something like

class Plus a b c | a b -> c where
    (+) :: a -> b -> c

Would

instance (Plus a b c, Monad m) => Plus (m a) (m b) (m c) where
    mx + my = do x <- mx
                 y <- my
                 return (x + y)

do what you're looking for?
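[Editor's note: filled out into a compilable sketch (the Int base instance, the pragmas, and main are additions), the suggestion does give the list behaviour Frederik asks for:]

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances, UndecidableInstances #-}
import Prelude hiding ((+))
import qualified Prelude

class Plus a b c | a b -> c where
  (+) :: a -> b -> c

-- Base case: ordinary Int addition.
instance Plus Int Int Int where
  (+) = (Prelude.+)

-- Lift Plus pointwise through any monad (UndecidableInstances is needed
-- because c is only determined via the recursive Plus a b c constraint).
instance (Plus a b c, Monad m) => Plus (m a) (m b) (m c) where
  mx + my = do x <- mx
               y <- my
               return (x + y)

main :: IO ()
main = print (([1,2] :: [Int]) + ([3,4] :: [Int]))
```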

Computational Mathematics Group
Pacific Northwest National Laboratory

"Time flies like an arrow; fruit flies like a banana." -- Groucho Marx

------------------------------------------------------------------
Original message:

Hi,

Sean's comment (yeah, it was like a billion years ago, just catching
up) is something that I've often thought myself.

I want the type system to be able to do "automatic lifting" of monads,
i.e., since [] is a monad, I should be able to write the following:

[1,2]+[3,4]

and have it interpreted as "do {a<-[1,2]; b<-[3,4]; return (a+b)}".

Also, I would have

The point I want to make is that this is much more general than IO or
monads! I think we all understand intuitively what mathematicians mean

{1,2}+{3,4} (i.e. { x+y | x\in {1,2}, y\in {3,4}})

(f+g)(x) where f(x)=x+1 and g(x)=x+4

So "automatic lifting" is a feature which is very simple to describe,
but which gives both of these notations their intuitive mathematical
meaning - not to mention making monadic code much tidier (who wants to
spend their time naming variables which are only used once?). I think
it deserves more attention.

I agree that in its simplest incarnation, there is some ugliness: the
order in which the values in the arguments are extracted from their
monads could be said to be arbitrary. Personally, I do not think that
this in itself is a reason to reject the concept. Because of currying, if
you think of the proposed operation not as lifting, but as inserting
aps:

return f ap x1 ap ... ap xn

then the ordering problem doesn't seem like such a big deal. I mean,
what other order does one expect, than one in which the arguments are
read in the same order that 'f' is applied to them?

Although it is true that in most of the instances where this feature
would be used, the order in which arguments are read from their monads
will not matter; yet that does not change the fact that in cases where
order *does* matter it's pretty damn easy to figure out what it will
be. For instance, in

two lines are read and then printed. Does anybody for a moment
question what order the lines should be read in?

Frederik
Frederik Eaton
2005-09-08 21:12:15 UTC
One of Mark Jones's articles suggests something like
class Plus a b c | a b -> c where
(+) :: a -> b -> c
Would
instance (Plus a b c, Monad m) => Plus (m a) (m b) (m c) where
mx + my = do x <- mx
y <- my
return (x + y)
do what you're looking for?

I'm not sure exactly what you have in mind. Obviously I want something
that applies to all functions, with any number of arguments, and not
just (+). Furthermore, it should handle cases like 1+[2,3] where only
one value is monadic. Keean Schupke's suggestion sounds more likely to
be useful, but I'm still reading it. In any case, a minimum of

Frederik
------------------------------------------------------------------
Hi,
Sean's comment (yeah, it was like a billion years ago, just catching
up) is something that I've often thought myself.
I want the type system to be able to do "automatic lifting" of monads,
[1,2]+[3,4]
and have it interpreted as "do {a<-[1,2]; b<-[3,4]; return (a+b)}".
Also, I would have
The point I want to make is that this is much more general than IO or
monads! I think we all understand intuitively what mathematicians mean
{1,2}+{3,4} (i.e. { x+y | x\in {1,2}, y\in {3,4}})
(f+g)(x) where f(x)=x+1 and g(x)=x+4
So "automatic lifting" is a feature which is very simple to describe,
but which gives both of these notations their intuitive mathematical
meaning - not to mention making monadic code much tidier (who wants to
spend their time naming variables which are only used once?). I think
it deserves more attention.
I agree that in its simplest incarnation, there is some ugliness: the
order in which the values in the arguments are extracted from their
monads could be said to be arbitrary. Personally, I do not think that
this in itself is a reason to reject the concept. Because of currying, if
you think of the proposed operation not as lifting, but as inserting
return f ap x1 ap ... ap xn
then the ordering problem doesn't seem like such a big deal. I mean,
what other order does one expect, than one in which the arguments are
read in the same order that 'f' is applied to them?
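The ap-chain reading can be checked directly in the list monad, where the left-to-right ordering is visible in the result (a small sketch, not from the original post):

```haskell
import Control.Monad (ap)

-- The "insert ap" reading: lifting f x y means return f `ap` mx `ap` my,
-- so effects run left to right, in the order f consumes its arguments.
pairs :: [(Char, Char)]
pairs = return (,) `ap` "ab" `ap` "cd"
-- [('a','c'),('a','d'),('b','c'),('b','d')]: the left computation is
-- sequenced first, as one would expect.

main :: IO ()
main = print pairs
```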
It is true that in most of the instances where this feature would be
used, the order in which arguments are read from their monads will not
matter; but that does not change the fact that in cases where order
*does* matter it's pretty damn easy to figure out what it will be. For
instance, in
two lines are read and then printed. Does anybody for a moment
question what order the lines should be read in?
Frederik
--
http://ofb.net/~frederik/
John Meacham
2005-09-09 00:23:40 UTC
Post by Frederik Eaton
One of Mark Jones's articles suggests something like
class Plus a b c | a b -> c where
    (+) :: a -> b -> c

Would

instance (Plus a b c, Monad m) => Plus (m a) (m b) (m c) where
    mx + my = do x <- mx
                 y <- my
                 return (x + y)
do what you're looking for?
I'm not sure exactly what you have in mind. Obviously I want something
that applies to all functions, with any number of arguments, and not
just (+). Furthermore, it should handle cases like 1+[2,3] where only
one value is monadic. Keean Schupke's suggestion sounds more likely to
be useful, but I'm still reading it. In any case, a minimum of
I think what he means is that if we had a somewhat better design of the
prelude type classes, we would be able to do this in haskell now for
most interesting operations. as we could write an instance like

instance (Monad m,Num a) => Num (m a) where
..
..

of course, we can't do this because Num has Ord and Show as superclasses
when it really doesn't need to. (we would have to create a separate
class for 'pattern matchable nums' if we got rid of those, but that is
no problem other than being non-Haskell-98 compatible). Solving this
'class inflexibility' problem in general is something I have given some
thought to. I will let everyone know if I figure something out...

John
--
John Meacham - john at repetae.net
Aaron Denney
2005-09-09 09:20:57 UTC
Post by John Meacham
of course, we can't do this because Num has Ord and Show as superclasses
when it really doesn't need to. (we would have to create a separate
class for 'pattern matchable nums' if we got rid of those, but that is
no problem other than being non-haskell-98 compatable). Solving this
'class inflexibility' problem in general is something I have given some
thought too. I will let everyone know if I figure something out...
Has anyone found any problem with your "superclasses" proposal?
http://repetae.net/john/recent/out/supertyping.html
--
Aaron Denney
-><-
Wolfgang Jeltsch
2005-09-09 11:03:28 UTC
Post by Frederik Eaton
I'm not sure exactly what you have in mind. Obviously I want something
that applies to all functions, with any number of arguments, and not
just (+). Furthermore, it should handle cases like 1+[2,3] where only
one value is monadic. Keean Schupke's suggestion sounds more likely to
be useful, but I'm still reading it. In any case, a minimum of
Frederik
Hello,

I doubt that it is a good thing to extend the language in such a way that such
far-reaching declarations are automatically generated. I would like to have
more control over which things are declared and which are not, and also over
the way they are declared. In addition, I'm against giving a specific class
(Monad) a very special position among all classes.

One of the advantages of functional programming languages is that the language
directly supports very few features but is so powerful that you can define
new features within the language itself. Maybe it would be good if those
people who want this automatic lifting of functions implemented it using

Best regards,
Wolfgang
Malcolm Wallace
2005-09-09 11:25:06 UTC
Post by Wolfgang Jeltsch
Post by Frederik Eaton
I'm not sure exactly what you have in mind. Obviously I want something
that applies to all functions, with any number of arguments, and not
just (+). Furthermore, it should handle cases like 1+[2,3] where only
I doubt that it is a good thing to extend the language in a way that such
far reaching declarations are automatically generated.
I agree. The original request was for something like
[1,2] + [3,4]
to be automatically lifted into a monad. But surely it is not too
difficult to define the required behaviour precisely (and only)
where needed, e.g.

(+.) = liftM2 (+)

[1,2] +. [3,4]

Where the functions in question are not infix, you don't even need to
define a new name, just use (liftM fn) directly inline!

Regards,
Malcolm
Keean Schupke
2005-09-09 12:21:07 UTC
Post by Malcolm Wallace
Post by Wolfgang Jeltsch
Post by Frederik Eaton
I'm not sure exactly what you have in mind. Obviously I want something
that applies to all functions, with any number of arguments, and not
just (+). Furthermore, it should handle cases like 1+[2,3] where only
I doubt that it is a good thing to extend the language in a way that such
far reaching declarations are automatically generated.
I agree. The original request was for something like
[1,2] + [3,4]
to be automatically lifted into a monad. But surely it is not too
difficult to define the required behaviour precisely (and only)
where needed, e.g.
(+.) = liftM2 (+)
[1,2] +. [3,4]
Why not make the monad an instance of Num? Then you do not proliferate
meaningless similar symbols... besides which I am sure all the good ones
(like +. <+> etc.) are already used ;)

instance (Monad m, Show a) => Show (m a)
...
instance (Monad m, Ord a) => Ord (m a)
...
instance (Monad m, Num a, Show (m a), Ord (m a)) => Num (m a) where
(+) = liftM2 (+)

The instances for Show and Ord can be empty if you don't need the
functionality...

Regards,
Keean.
Keean Schupke
2005-09-09 12:36:36 UTC
Post by Frederik Eaton
I'm not sure exactly what you have in mind. Obviously I want something
that applies to all functions, with any number of arguments, and not
just (+). Furthermore, it should handle cases like 1+[2,3] where only
Just noticed the 1+[2,3] case... I am not certain whether this is
possible - it is outside the scope of the formal definition of Haskell
and may rely on implementation details of the compiler/interpreter.

Effectively we need to redefine list as a class; then (Num a) can be made
an instance of the class... See my implementation of Joy in the HList
library (this lifts numbers into an AST rather than a list) - this
however uses type-level programming and has problems with non-static
types (i.e. you need to use existentials for lists whose values are not
known at compile time)...

The easy answer is to define a type that contains both singletons and
lists... although the type constructors may not look as neat.

Regards,
Keean.
J. Garrett Morris
2005-09-09 12:46:09 UTC
Post by Keean Schupke
Just noticed the 1+[2,3] case... I am not certain whether this is
possible - it is outside the scope of the formal definition of Haskell
and may rely on implementation details of the compiler/interpreter.
While this is outside the scope of the current Num class, if we adopt
the suggestion to redefine the standard classes using functional
dependencies (which is, I think, a useful addition to the standard
prelude anyway), we have, for example:

class Plus a b c | a b -> c
where (+) :: a -> b -> c

instance (Plus a b c, Functor f) => Plus a (f b) (f c)
where a + fb = fmap (a +) fb

and so forth. However, this doesn't generalize obviously (in any way
that I see) to generic two argument functions, which seemed to be what
Frederik wanted.
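Garrett's mixed-argument instance can be made compilable as follows (the Int base instance and the example are added for illustration; they are not in the original post):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}
{-# LANGUAGE FlexibleInstances, UndecidableInstances #-}
import Prelude hiding ((+))
import qualified Prelude

class Plus a b c | a b -> c where
    (+) :: a -> b -> c

instance Plus Int Int Int where
    (+) = (Prelude.+)

-- The mixed case: a pure left argument and a functorial right one.
instance (Plus a b c, Functor f) => Plus a (f b) (f c) where
    a + fb = fmap (a +) fb

mixed :: [Int]
mixed = (1 :: Int) + [2, 3 :: Int]   -- fmap (1 +) [2,3]
```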

/g
Aaron Denney
2005-09-09 19:03:33 UTC
Post by Keean Schupke
Post by Frederik Eaton
I'm not sure exactly what you have in mind. Obviously I want something
that applies to all functions, with any number of arguments, and not
just (+). Furthermore, it should handle cases like 1+[2,3] where only
Just noticed the 1+[1,2] case... I am not certain whether this is
possible - it is outside the
scope of the formal definiton of Haskell and may rely on implementation
details of the compiler/interpreter.
Effectivly we need to redefine list as a class, then (Num a) can be made
an instance of the class... See my implementation of Joy in the HList
library. (this lifts numbers into an AST rather than a list) - this
however uses type level programming and has problems with non static
types (IE you need to use existentials for lists who's values is not
known at compile time)...
The easy answer is to define a type that contains both singletons and
lists... although the type constructors may not look as neat.
I thought the easy answer would be to inject non-monadic values into the
--
Aaron Denney
-><-
Frederik Eaton
2005-09-09 22:37:24 UTC
By the way, I thought it would be obvious, but a lot of people seem to
be missing the fact that I'm not (as Sean, I believe, isn't)
requesting limited support for 1 or 2 or 3 argument functions or
certain type classes to be applied to monads, or for certain
operations to be defined on certain types. I know at least how to define
type classes and functions. If this is what I wanted I would probably
do it myself.
Post by Aaron Denney
I thought the easy answer would be to inject non-monadic values into the
I don't know if this is the right way of looking at it. Do you have an
example?

My idea is that you should be able to have code like this:

-- (a)

m3 :: a -> m b

m6 = do
m1
m2
m3 (p1 (p2 p3 (p4 m4 p5)) p6)
m5

- where the m* values are functions returning monads and the p* values
are so-called "pure" functions, i.e. functions which don't take monad
values or return monad results (so currently the above code won't
type-check because of m4) - but have it be interpreted as:

-- (b)

m3 :: a -> m b

m6 = do
m1
m2
v <- m4
m3 (p1 (p2 p3 (p4 v p5)) p6)
m5

Note that in (a), "pure" values are never used where monads are asked
for, only the other way around.

I think that supporting syntax (a) for semantics (b) should be a
feature because: (1) it is (usually) obvious what (a) means; (2) it
eliminates the single-use variable 'v' - single-use variables like
this occur a lot in monadic Haskell code, and I think they make it
harder to read and write; (3) it would support the math-like syntax
that I presented in my original message.

It might be hard to modify the type checker to get it to work, but I
think it is possible, and I see no reason not to be as general as
possible.
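For comparison, here is how semantics (b) has to be spelled out in current Haskell with an explicit lift, using Sean's original assert-the-lengths example (someFunc is a hypothetical stand-in action, not the original code):

```haskell
import Control.Monad (liftM2)

-- Hypothetical stand-in for the IO action in the original message.
someFunc :: String -> IO String
someFunc = return

-- What the proposal would let us write as
--   assertBool "fail" (length (someFunc a) == length (someFunc b))
-- is today an explicit liftM2 over the two actions:
check :: IO Bool
check = liftM2 (\xs ys -> length xs == length ys)
               (someFunc "ab") (someFunc "cd")
```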

Would it mean treating the 'Monad' class specially? Perhaps, but I
don't think this is a reason to avoid it. Further, it is likely that
whatever is done to extend the type checker could be given a general
meta-declaration in the same spirit as "infixr" etc.

Also, I do not think that template haskell is powerful enough to
support this, but I'm willing to be proven wrong.

Frederik
--
http://ofb.net/~frederik/
Cale Gibbard
2005-09-09 23:53:04 UTC
Despite having a fairly mathematical background, I don't really care
for the proposed syntax.

myList :: [[Integer]]
myList = return [1,2,3,4]

Is myList equal to [[1,2,3,4]] or [[1],[2],[3],[4]]? Either
interpretation is possible if there is automatic lifting about. If the
lifting only occurs when a type error would otherwise have happened,
then there will be cases where genuine type errors are happening and
being obscured by automatic lifting.

This basically takes a type error reported at compile time, which, in
the case where it would have been solved by lifting, is easily
resolved by simply adding a line to a do-block or by using liftM, and
turns it into a potential behavioural error which may only be detected
at runtime, and whose source in the code may not be so obvious (since
the lifting was unintentional in the first place).

Also, a class instance, say of Num for lists (treating them as
polynomials/power series) would suddenly turn one valid piece of code,
like [1,2,3] + [4,5,6] defined by automatic lifting, into a completely
different one, and suddenly silently introduce bugs into previously
written code in the module (perhaps written by another author who had
intended to use the automatic lifting).

I think that's reason enough to make people say what they mean in each
case. Automatic lifting is performed in mathematics because it is
assumed that the reader is an intelligent human who will be able to
infer quite reasonably what is meant in each (often somewhat
ambiguous) case. Haskell programs are not written only for humans, but
also for the Haskell compiler, which can't be expected to (and quite
possibly shouldn't try to) judge the intent of a piece of code.

- Cale
Post by Frederik Eaton
By the way, I thought it would be obvious, but a lot of people seem to
be missing the fact that I'm not (as Sean, I believe, isn't)
requesting limited support for 1 or 2 or 3 argument functions or
certain type classes to be applied to monads, or for certain
operations to defined on certain types. I know at least how to define
type classes and functions. If this is what I wanted I would probably
do it myself.
Post by Aaron Denney
I thought the easy answer would be to inject non-monadic values into the
I don't know if this is the right way of looking at it. Do you have an
example?
-- (a)
m3 :: a -> m b
m6 = do
m1
m2
m3 (p1 (p2 p3 (p4 m4 p5)) p6)
m5
- where the m* values are functions returning monads and the p* values
are so-called "pure" functions, i.e. functions which don't take monad
values or return monad results (so currently the above code won't
-- (b)
m3 :: a -> m b
m6 = do
m1
m2
v <- m4
m3 (p1 (p2 p3 (p4 v p5)) p6)
m5
Note that in (a), "pure" values are never used where monads are asked
for, only the other way around.
I think that supporting syntax (a) for semantics (b) should be a
feature because: (1) it is (usually) obvious what (a) means; (2) it
eliminates the single-use variable 'v' - single-use variables like
this occur a lot in monadic Haskell code, and I think they make it
harder to read and write; (3) it would support the math-like syntax
that I presented in my original message.
It might be hard to modify the type checker to get it to work, but I
think it is possible, and I see no reason not to be as general as
possible.
Would it mean treating the 'Monad' class specially? Perhaps, but I
don't think this is a reason to avoid it. Further, it is likely that
whatever is done to extend the type checker could be given a general
meta-declaration in the same spirit as "infixr" etc.
Also, I do not think that template haskell is powerful enough to
support this, but I'm willing to be proven wrong.
Frederik
--
http://ofb.net/~frederik/
Frederik Eaton
2005-09-10 07:51:04 UTC
These are good arguments, and I think this is a good direction for the
discussion, should it continue.
Post by Cale Gibbard
Despite having a fairly mathematical background, I don't really care
for the proposed syntax.
myList :: [[Integer]]
myList = return [1,2,3,4]
I'm assuming you mean

myList :: [[Integer]]
myList = [1,2,3,4]
Post by Cale Gibbard
Is myList equal to [[1,2,3,4]] or [[1],[2],[3],[4]]? Either
interpretation is possible if there is automatic lifting about. If the
lifting only occurs when a type error would otherwise have happened,
then there will be cases where genuine type errors are happening and
being obscured by automatic lifting.
Perhaps. The question is whether enough type errors will be left over
to keep type checking useful. I'm willing to explore the possibility
that there are. This is not the same as an all-purpose, C++-style
"cast" operator. It only applies to Monads, and it never makes them
disappear, it only introduces and propagates them. The effect on the
type system is thus circumscribed. You can only introduce a Monad in

I think there may be better examples than the one you gave. To me
it seems intuitive that myList should be [[1,2,3,4]]. It also seems
intuitive that this can be turned into a general rule, although I'm
not accustomed to this sort of reasoning.

By the way, I'm not entirely confident in the definition of "monadic
context" that I gave in a message dated "Wed, 7 Sep 2005 23:41:41
-0700".
Post by Cale Gibbard
This basically takes a type error reported at compile time, which, in
the case where it would have been solved by lifting, is easily
resolved by simply adding a line to a do-block or by using liftM, and
turns it into a potential behavioural error which may only be detected
at runtime, and whose source in the code may not be so obvious (since
the lifting was unintentional in the first place).
This is true, if you see the two values as essentially different.

But if you intended to write [[1,2],[3,4]], and you're complaining
that the type system would no longer save you from omitting the outer
and middle brackets by accident, then I would say: but even now it
doesn't save you from omitting just the middle brackets by accident.
You still have to watch what you type.

And I would say, furthermore, that many of the errors that the type
system currently catches are a result of programs being too far from
their specifications, in the present syntax, and that if my proposal
succeeds in bringing them closer, then it isn't clear that there won't
be a net decrease in errors as a result. Many of the technologies
which Haskell researchers are inventing to make programming easier
already have this effect - of making errors harder to detect, but
compensating by making programs more concise. Strong typing is a great
language feature, but it isn't the only one.
Post by Cale Gibbard
Also, a class instance, say of Num for lists (treating them as
polynomials/power series) would suddenly turn one valid piece of code,
like [1,2,3] + [4,5,6] defined by automatic lifting, into a completely
different one, and suddenly silently introduce bugs into previously
written code in the module (perhaps written by another author who had
intended to use the automatic lifting).
Again, I'm trying to imagine a better example. In this case,
personally: even if I weren't forced to, and even if it just wrapped a
list, I would create my own polynomial datatype with 'newtype'. Just
for the sake of clarity. So it would not be a problem. Am I being
obtuse?
Post by Cale Gibbard
... Automatic lifting is performed in mathematics because it is
assumed that the reader is an intelligent human who will be able to
infer quite reasonably what is meant in each (often somewhat
ambiguous) case. Haskell programs are not written only for humans,
but also for the Haskell compiler, which can't be expected to (and
quite possibly shouldn't try to) judge the intent of a piece of
code.
This is a most persuasive observation.

And it is often true that the mathematical authors we invoke will
explicitly "declare" what they mean before adding functions or
sets, as I am being asked to do. However, they'll also define
explicitly what a homomorphism is, over and over again, separately:
for groups, rings, fields, etc.

But the fact is that we don't really use separate concepts for
homomorphisms on different categories, when we think about them. And
if we look closely we can even discover a simple and unambiguous rule
uniting them, and we can apply this rule to programming language
design. The fact that the general rule was also given to us in the
form of different specialized versions, by mathematicians, let alone
by programmers, doesn't mean that we shouldn't try to do better.

I think the same is true for a large class of mathematical shorthand
(and I think that the importance of notation is underrated). I think
most such shorthand can be understood simply in terms of lifting, and
I hypothesize that we can find an automatic lifting rule along the
lines I've described which will not be as ambiguous as you suggest.

Frederik
Post by Cale Gibbard
Post by Frederik Eaton
By the way, I thought it would be obvious, but a lot of people seem to
be missing the fact that I'm not (as Sean, I believe, isn't)
requesting limited support for 1 or 2 or 3 argument functions or
certain type classes to be applied to monads, or for certain
operations to defined on certain types. I know at least how to define
type classes and functions. If this is what I wanted I would probably
do it myself.
Post by Aaron Denney
I thought the easy answer would be to inject non-monadic values into the
I don't know if this is the right way of looking at it. Do you have an
example?
-- (a)
m3 :: a -> m b
m6 = do
m1
m2
m3 (p1 (p2 p3 (p4 m4 p5)) p6)
m5
- where the m* values are functions returning monads and the p* values
are so-called "pure" functions, i.e. functions which don't take monad
values or return monad results (so currently the above code won't
-- (b)
m3 :: a -> m b
m6 = do
m1
m2
v <- m4
m3 (p1 (p2 p3 (p4 v p5)) p6)
m5
Note that in (a), "pure" values are never used where monads are asked
for, only the other way around.
I think that supporting syntax (a) for semantics (b) should be a
feature because: (1) it is (usually) obvious what (a) means; (2) it
eliminates the single-use variable 'v' - single-use variables like
this occur a lot in monadic Haskell code, and I think they make it
harder to read and write; (3) it would support the math-like syntax
that I presented in my original message.
It might be hard to modify the type checker to get it to work, but I
think it is possible, and I see no reason not to be as general as
possible.
Would it mean treating the 'Monad' class specially? Perhaps, but I
don't think this is a reason to avoid it. Further, it is likely that
whatever is done to extend the type checker could be given a general
meta-declaration in the same spirit as "infixr" etc.
Also, I do not think that template haskell is powerful enough to
support this, but I'm willing to be proven wrong.
Frederik
--
http://ofb.net/~frederik/
--
http://ofb.net/~frederik/
Cale Gibbard
2005-09-11 06:29:15 UTC
Post by Frederik Eaton
These are good arguments, and I think this is a good direction for the
discussion, should it continue.
Post by Cale Gibbard
Despite having a fairly mathematical background, I don't really care
for the proposed syntax.
myList :: [[Integer]]
myList = return [1,2,3,4]
I'm assuming you mean
myList :: [[Integer]]
myList = [1,2,3,4]
No, the confusion is over whether to lift the return, because the
inner and outer monads are the same. It's either return [1,2,3,4] ==
[[1,2,3,4]], or it's liftM return [1,2,3,4] == [[1],[2],[3],[4]]
Post by Frederik Eaton
Post by Cale Gibbard
Is myList equal to [[1,2,3,4]] or [[1],[2],[3],[4]]? Either
interpretation is possible if there is automatic lifting about. If the
lifting only occurs when a type error would otherwise have happened,
then there will be cases where genuine type errors are happening and
being obscured by automatic lifting.
Perhaps. The question is whether enough type errors will be left over
to keep type checking useful. I'm willing to explore the possibility
that there are. This is not the same as an all-purpose, C++-style
"cast" operator. It only applies to Monads, and it never makes them
disappear, it only introduces and propagates them. The effect on the
type system is thus circumscribed. You can only introduce a Monad in
I think there may be better examples than the one you gave. To me
it seems intuitive that myList should be [[1,2,3,4]]. It also seems
intuitive that this can be turned into a general rule, although I'm
not accustomed to this sort of reasoning.
By the way, I'm not entirely confident in the definition of "monadic
context" that I gave in a message dated "Wed, 7 Sep 2005 23:41:41
-0700".
Post by Cale Gibbard
This basically takes a type error reported at compile time, which, in
the case where it would have been solved by lifting, is easily
resolved by simply adding a line to a do-block or by using liftM, and
turns it into a potential behavioural error which may only be detected
at runtime, and whose source in the code may not be so obvious (since
the lifting was unintentional in the first place).
This is true, if you see the two values as essentially different.
But if you intended to write [[1,2],[3,4]], and you're complaining
that the type system would no longer save you from omitting the outer
and middle brackets by accident, then I would say: but even now it
doesn't save you from omitting just the middle brackets by accident.
You still have to watch what you type.
Well, the concern isn't so much for explicit lists as such (as errors
there are quite visible), but when the lists involved are abstract
anyway, perhaps hidden in some chain of compositions.
Post by Frederik Eaton
And I would say, furthermore, that many of the errors that the type
system currently catches are a result of programs being too far from
their specifications, in the present syntax, and that if my proposal
succeeds in bringing them closer, then it isn't clear that there won't
be a net decrease in errors as a result. Many of the technologies
which Haskell researchers are inventing to make programming easier
already have this effect - of making errors harder to detect, but
compensating by making programs more concise. Strong typing is a great
language feature, but it isn't the only one.
Post by Cale Gibbard
Also, a class instance, say of Num for lists (treating them as
polynomials/power series) would suddenly turn one valid piece of code,
like [1,2,3] + [4,5,6] defined by automatic lifting, into a completely
different one, and suddenly silently introduce bugs into previously
written code in the module (perhaps written by another author who had
intended to use the automatic lifting).
Again, I'm trying to imagine a better example. In this case,
personally: even if I weren't forced to, and even if it just wrapped a
list, I would create my own polynomial datatype with 'newtype'. Just
for the sake of clarity. So it would not be a problem. Am I being
obtuse?
Well, that's true, but many types are instances of Monad and instances
of another class, and often not in a way which is compatible with your
form of lifting. That's the generic version of the point which I'd
hoped would be extracted from the example.
Post by Frederik Eaton
Post by Cale Gibbard
... Automatic lifting is performed in mathematics because it is
assumed that the reader is an intelligent human who will be able to
infer quite reasonably what is meant in each (often somewhat
ambiguous) case. Haskell programs are not written only for humans,
but also for the Haskell compiler, which can't be expected to (and
quite possibly shouldn't try to) judge the intent of a piece of
code.
This is a most persuasive observation.
And it is often true that these authors we invoke for mathematicians
will explicitly "declare" what they mean before adding functions or
sets, as I am being asked to do. However, they'll also define
explicitly what a homomorphism is, over and over again, separately;
for groups, rings, fields, etc.
But the fact is that we don't really use separate concepts for
homomorphisms on different categories, when we think about them. And
if we look closely we can even discover a simple and unambiguous rule
uniting them, and we can apply this rule to programming language
design. The fact that the general rule was also given to us in the
form of different specialized versions, by mathematicians, let alone
by programmers, doesn't mean that we shouldn't try to do better.
I think the same is true for a large class of mathematical shorthand
(and I think that the importance of notation is underrated). I think
most such shorthand can be understood simply in terms of lifting, and
I hypothesize that we can find an automatic lifting rule along the
lines I've described which will not be as ambiguous as you suggest.
Why not look for a way to do the lifting which is explicit, yet not
inconvenient? Monads often do a great job of this on their own. One
can already write code with a fair amount of convenience which works
across monads and which determines its effects based on its inputs.

It may be possible to use template haskell to derive an automatic
monad lifter which could be applied in a controlled fashion, and if
people really like it, it could be given some special syntax perhaps.

Another point which is perhaps important to mention is that the
monadification of some code is not really something which is unique.
The right level of abstraction is sometimes not to just apply liftMn
to everything, but to use something stronger, like MonadPlus, and to
be careful about how everything happens. This can come up when
producing a nondeterministic abstraction of an initially deterministic
algorithm. Full nondeterminism without any optimisation is often not
what you want. You might want to work with a subset of the
combinations of the inputs that may not at first be obvious, and there
are various optimisations which may be necessary to curb combinatorial
explosion (even though laziness often does help a lot here). To do
this sort of optimisation, you need an instance of MonadPlus (or at
least MonadZero, but that's not around anymore). Also, if the whole
thing is autolifted, you can't inject these controls into the middle,
and end up lifting the code by hand anyway. Careful application of the
generic lifting could be very useful, but unrestrained, in many cases
could completely fail to be helpful.

Also, with multiparameter functions, liftMn isn't always the right
thing to apply. Maybe some parameters should be lifted to one monad
and others to another, and the results collected in a suitable
combination. This sort of thing is hard to do automatically, and it
can be difficult even to tell which lifting is intended.

What we probably want is an explicit, but generic, lifting mechanism,
which takes care of the boilerplate parts of the lifting, but which we
can control with some precision. Doing it automatically everywhere
seems like it could often be inconvenient.
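One explicit-but-generic scheme of the kind Cale asks for can be built from a pair of lifting combinators, so every lifting site stays visible in the source. A sketch (the operator names are hypothetical; they correspond to what later became Applicative's <$> and <*>):

```haskell
import Control.Monad (ap, liftM)

infixl 4 <$$>, <**>

-- Lift a pure function over its first monadic argument.
(<$$>) :: Monad m => (a -> b) -> m a -> m b
(<$$>) = liftM

-- Apply a lifted function to one more monadic argument.
(<**>) :: Monad m => m (a -> b) -> m a -> m b
(<**>) = ap

-- Works for any arity; each lifting site is marked in the code.
demo :: Maybe Int
demo = (\x y z -> x + y + z) <$$> Just 1 <**> Just 2 <**> Just 3
-- Just 6
```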

- Cale
Tomasz Zielonka
2005-09-16 11:00:17 UTC
Post by Cale Gibbard
Post by Frederik Eaton
These are good arguments, and I think this is a good direction for the
discussion, should it continue.
Post by Cale Gibbard
Despite having a fairly mathematical background, I don't really care
for the proposed syntax.
myList :: [[Integer]]
myList = return [1,2,3,4]
I'm assuming you mean
myList :: [[Integer]]
myList = [1,2,3,4]
No, the confusion is over whether to lift the return, because the
inner and outer monads are the same. It's either return [1,2,3,4] ==
[[1,2,3,4]], or it's liftM return [1,2,3,4] == [[1],[2],[3],[4]]
I think that Frederik's example is even better. Here you either lift the
entire list or each individual number:

return [1,2,3,4]

or

[return 1,return 2,return 3,return 4]

After all, return is only a fancy name for liftM0 ;-)
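Tomasz's two readings, spelled out as compilable code (a small sketch added for illustration):

```haskell
import Control.Monad (liftM)

-- The same source text admits two liftings at type [[Integer]]:
outer, inner :: [[Integer]]
outer = return [1,2,3,4]         -- lift the whole list:  [[1,2,3,4]]
inner = liftM return [1,2,3,4]   -- lift each element:    [[1],[2],[3],[4]]
```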

Best regards
Tomasz
Yitzchak Gale
2005-09-10 19:15:22 UTC
I heartily agree with everything Cale wrote
on this topic.

In addition, I hereby apologize to Claus for
being too lazy to participate in the survey.

Regards,
Yitz
Post by Cale Gibbard
Despite having a fairly mathematical background, I don't really care
for the proposed syntax.
myList :: [[Integer]]
myList = return [1,2,3,4]
Is myList equal to [[1,2,3,4]] or [[1],[2],[3],[4]]? Either
interpretation is possible if there is automatic lifting about. If the
lifting only occurs when a type error would otherwise have happened,
then there will be cases where genuine type errors are happening and
being obscured by automatic lifting.
This basically takes a type error reported at compile time, which, in
the case where it would have been solved by lifting, is easily
resolved by simply adding a line to a do-block or by using liftM, and
turns it into a potential behavioural error which may only be detected
at runtime, and whose source in the code may not be so obvious (since
the lifting was unintentional in the first place).
Also, a class instance, say of Num for lists (treating them as
polynomials/power series) would suddenly turn one valid piece of code,
like [1,2,3] + [4,5,6] defined by automatic lifting, into a completely
different one, and suddenly silently introduce bugs into previously
written code in the module (perhaps written by another author who had
intended to use the automatic lifting).
I think that's reason enough to make people say what they mean in each
case. Automatic lifting is performed in mathematics because it is
assumed that the reader is an intelligent human who will be able to
infer quite reasonably what is meant in each (often somewhat
ambiguous) case. Haskell programs are not written only for humans, but
also for the Haskell compiler, which can't be expected to (and quite
possibly shouldn't try to) judge the intent of a piece of code.
- Cale
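[Cale's worry about a Num instance for lists can be made concrete. Under
automatic lifting, (+) on lists would mean a cross-product via liftM2,
while a polynomial-style instance would add coefficient-wise; the sketch
below uses my own names, and zipWith stands in for polynomial addition
on equal-length coefficient lists:]

```haskell
import Control.Monad (liftM2)

-- What automatic lifting would make of [1,2,3] + [4,5,6]:
-- the sum of every combination of elements, via the list monad.
liftedSum :: [Int]
liftedSum = liftM2 (+) [1,2,3] [4,5,6]

-- What a polynomial/power-series Num instance would mean instead:
-- coefficient-wise addition.
polySum :: [Int]
polySum = zipWith (+) [1,2,3] [4,5,6]
```

The two readings give entirely different results (nine elements versus
three), which is exactly the silent-bug scenario Cale describes.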
Post by Frederik Eaton
By the way, I thought it would be obvious, but a lot of people seem to
be missing the fact that I'm not (as Sean, I believe, isn't)
requesting limited support for 1 or 2 or 3 argument functions or
certain type classes to be applied to monads, or for certain
operations to defined on certain types. I know at least how to define
type classes and functions. If this is what I wanted I would probably
do it myself.
Post by Aaron Denney
I thought the easy answer would be to inject non-monadic values into the monad.
I don't know if this is the right way of looking at it. Do you have an
example?
-- (a)
m3 :: a -> m b
m6 = do
  m1
  m2
  m3 (p1 (p2 p3 (p4 m4 p5)) p6)
  m5
- where the m* values are functions returning monads and the p* values
are so-called "pure" functions, i.e. functions which don't take monad
values or return monad results (so currently the above code won't
typecheck).
-- (b)
m3 :: a -> m b
m6 = do
  m1
  m2
  v <- m4
  m3 (p1 (p2 p3 (p4 v p5)) p6)
  m5
Note that in (a), "pure" values are never used where monads are asked
for, only the other way around.
I think that supporting syntax (a) for semantics (b) should be a
feature because: (1) it is (usually) obvious what (a) means; (2) it
eliminates the single-use variable 'v' - single-use variables like
this occur a lot in monadic Haskell code, and I think they make it
harder to read and write; (3) it would support the math-like syntax
that I presented in my original message.
It might be hard to modify the type checker to get it to work, but I
think it is possible, and I see no reason not to be as general as
possible.
Would it mean treating the 'Monad' class specially? Perhaps, but I
don't think this is a reason to avoid it. Further, it is likely that
whatever is done to extend the type checker could be given a general
meta-declaration in the same spirit as "infixr" etc.
Also, I do not think that template haskell is powerful enough to
support this, but I'm willing to be proven wrong.
Frederik
--
http://ofb.net/~frederik/
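[Frederik's rewrite from (a) to (b) can be sketched in a concrete monad.
Here is the same shape in Maybe, with hypothetical stand-ins for m4 and
the pure functions:]

```haskell
-- Hypothetical stand-ins for the names in (a)/(b) above.
m4 :: Maybe Int
m4 = Just 10

p4 :: Int -> Int -> Int
p4 = (+)

p5 :: Int
p5 = 1

-- Style (b): the single-use variable v extracts m4's result explicitly.
styleB :: Maybe Int
styleB = do
  v <- m4
  return (p4 v p5)

-- The liftM-style alternative mentioned in the thread, avoiding the
-- temporary variable by mapping the pure computation over the monad.
styleA :: Maybe Int
styleA = fmap (\v -> p4 v p5) m4
```

Both spellings denote the same computation; the debate is only about
which one the programmer should have to write.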
Claus Reinke
2005-09-10 00:34:57 UTC
life is funny, isn't it? so many people so eagerly discussing conversion
between non-monadic and monadic code - yet when we asked for your
opinions and suggestions on this very topic only a short while ago,
we got a total of 4 (four) replies - all quite useful, mind you, so we were
grateful, but still one wonders.. we might have assumed that not many
people cared after all:

shall I assume that all participants in this discussion have joined
problems of monadic lifting?-) in which case I'd invite you to have
a look at that survey and the papers mentioned.
Post by Frederik Eaton
Post by Aaron Denney
I thought the easy answer would be to inject non-monadic values into the monad.
I'd phrase it slightly differently: what (I think) one wants are implicit coercions
from non-monadic values into the current monad, and implicit execution of
computations in the current monad to get a non-monadic result if only that is
needed (although one might think of the latter as partially lifting the operation
that consumes the result).

only I wouldn't want those implicit coercions to be introduced unless
programmers explicitly ask for that (one usually only converts code from
non-monadic to monadic once, and while the details of that step might
be tiresome and in need of tool-support, the step itself should be explicit
- see my comment on (2) below).
Post by Frederik Eaton
Note that in (a), "pure" values are never used where monads are asked
for, only the other way around.
that is probably where some would beg to differ - if you lift operations,
why not lift values as well?
Post by Frederik Eaton
I think that supporting syntax (a) for semantics (b) should be a
feature because: (1) it is (usually) obvious what (a) means; (2) it
eliminates the single-use variable 'v' - single-use variables like
this occur a lot in monadic Haskell code, and I think they make it
harder to read and write; (3) it would support the math-like syntax
that I presented in my original message.
(1) "(usually) obvious" is tech-speak for "(perhaps) possible to
figure out, though probably not uniquely determined"?-)

when mathematicians abuse notation in the "obvious" way,
there is usually an assumed context in which the intended
abuses are clearly defined (if not, there is another context
in which the "obvious" things will go unexpectedly awry).

(2) Haskell deliberately distinguishes between the construction
and execution of monadic computations. if you want everything
mixed into one soup, ML might be your language of choice
(http://portal.acm.org/citation.cfm?id=178047 , if I recall correctly?
see the paper discussed in
http://lambda-the-ultimate.org/node/view/552
for one application that demonstrates the power/danger of such
a mixture).
(3) using math-like syntax for lifted expressions is common practice
in some families of Haskell-DSELs, eg. Conal Elliott's Fran. As John
pointed out, the predefined class-hierarchy is not really helpful
for such endeavours, but if one isn't picky, one may ignore classes
not used.. the "trick" is to lift even constants, so when you get
to applications, all components are already lifted, and lifting most
arithmetic works out fine (Boolean operations are another matter).

note, however, that the resulting language, while looking
mathematically pure and permitting concise expression of complex
circumstances, may not have the reasoning properties you expect..
Post by Frederik Eaton
It might be hard to modify the type checker to get it to work, but I
think it is possible, and I see no reason not to be as general as
possible.
here I'd agree, although in contrast to you, I'd be talking about a
complex refactoring under programmer control, not about an implicitly
invoked collection of coercions. I played with that idea after Martin
Erwig visited our refactoring project in March, and got to a prototype
type-coercion inference system for a very simple functional language,
because I found the situation with various existing and, as Erwig/Ren
pointed out, apparently unrelated monadification algorithms confusing.

apart from the various styles of monadification, which we'd like
to permit, and have the programmer select, e.g., by type annotations,
there is the slight problem that there are an unbounded number of
different monadifications (more if one wants to keep annotations to
a minimum), so one needs a sensible bound (one that does not
exclude any of the alternatives one might want). one also might
want to be able to choose between the alternatives (or tune the
system so that taking the first choice works out ok most of the
time). oh, and it shouldn't be too inefficient, and it is really a pain
to re-implement a type-system just to add a few coercion rules to
it (which is why I haven't extended my mini fpl to Haskell yet..).

in light of this, perhaps some more participants in this discussion
might want to look into contributing their suggestions to our old
survey?

cheers,
claus
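[The "lift even constants" trick Claus describes for Fran-style DSELs can
be sketched with time-indexed functions as the lifted domain. All names
below are my own illustrations, not Fran's actual API:]

```haskell
-- A minimal sketch of Fran-style lifting: values indexed by time.
type Behavior a = Double -> a

lift0 :: a -> Behavior a
lift0 = const                      -- lift even constants

lift2 :: (a -> b -> c) -> Behavior a -> Behavior b -> Behavior c
lift2 f x y = \t -> f (x t) (y t)  -- lift binary operations pointwise

time :: Behavior Double
time = id

-- "1 + time": because the constant is lifted too, every component of the
-- expression already lives at the lifted type, and application just works.
onePlusTime :: Behavior Double
onePlusTime = lift2 (+) (lift0 1) time
```

Sampling the behavior at a time gives the expected arithmetic, e.g.
onePlusTime 5 is 6.0.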
Frederik Eaton
2005-09-11 00:06:40 UTC
Post by Claus Reinke
life is funny, isn't it? so many people so eagerly
lazily, in my case
Post by Claus Reinke
I'm trying to discuss a new syntax, not code transformations. I agree
that the two are related. I'm interested in the latter, but I don't
understand it very well. I think of refactoring as an operation that
takes source code to source code, i.e. unlike most operations done on
source code, refactoring produces output which is meant to be edited
by humans. Is this correct? But if it is, doesn't it mean that one
would like refactorizations to have some ill-defined "reversibility"
property:

a refactorization should have an inverse which commutes with simple
edits

For instance, if I (a) rename a variable, and then (b) introduce a new
reference to the renamed variable somewhere, I can later decide to
change the name back, reverting (a), without losing the work I did in
the meantime in (b). I can do this by applying another rename
operation, which will also affect the new reference.

Or, if I (a) take a bit of code and remove it to a separate function,
and then (b) modify the body of that function, I can later decide to
inline the function back into the one place which calls it, thus
reverting (a), without losing the modification done in (b).

Yet, I don't see how the "monadification" operations you propose could
have this property. They are certainly code transformations! But they
seem irreversible - once I (a) apply your transformations and (b) edit
the output, I can't revert (a) without losing the work done in (b).
Changes to the code become tightly coupled, design becomes less
tractable.
Post by Claus Reinke
yet when we asked for your opinions and suggestions on this very
topic only a short while ago, we got a total of 4 (four) replies -
all quite useful, mind you, so we were grateful, but still one
wonders.. we might have assumed that not many people cared after
It might have been more useful to ask for survey replies to be sent to
the list. Often the various opinions of a large number of people can
be compressed to a few representative positions. But if respondents
can't see what opinions have been expressed so far, then this
time-saving compression becomes impossible. That is just my opinion.
Post by Claus Reinke
shall I assume that all participants in this discussion have joined
problems of monadic lifting?-) in which case I'd invite you to have
a look at that survey and the papers mentioned.
I should do that, yes!

It's just that I was a bit late, having misplaced my trumpet.
Post by Claus Reinke
Post by Frederik Eaton
Post by Aaron Denney
I thought the easy answer would be to inject non-monadic values into the monad.
I'd phrase it slightly differently: what (I think) one wants are implicit coercions
computations in the current monad to get a non-monadic result if only that is
needed (although one might think of the latter as partially lifting the operation
only I wouldn't want those implicit coercions to be introduced unless
programmers explicitly ask for that (one usually only converts code from
non-monadic to monadic once, and while the details of that step might
be tiresome and in need of tool-support, the step itself should be explicit
- see my comment on (2) below).
Post by Frederik Eaton
Note that in (a), "pure" values are never used where monads are asked
for, only the other way around.
that is probably where some would beg to differ - if you lift operations,
why not lift values as well?
Oh, one should do both, I was just giving a case where value-lifting
didn't happen, as a counterexample to Aaron's viewpoint.
Post by Claus Reinke
Post by Frederik Eaton
I think that supporting syntax (a) for semantics (b) should be a
feature because: (1) it is (usually) obvious what (a) means; (2) it
eliminates the single-use variable 'v' - single-use variables like
this occur a lot in monadic Haskell code, and I think they make it
harder to read and write; (3) it would support the math-like syntax
that I presented in my original message.
(1) "(usually) obvious" is tech-speak for "(perhaps) possible to
figure out, though probably not uniquely determined"?-)
when mathematicians abuse notation in the "obvious" way,
there is usually an assumed context in which the intended
abuses are clearly defined (if not, there is another context
in which the "obvious" things will go unexpectedly awry).
and execution of monadic computations. if you want everything
mixed into one soup, ML might be your language of choice
(http://portal.acm.org/citation.cfm?id=178047 , if I recall correctly?
see the paper discussed in
http://lambda-the-ultimate.org/node/view/552
for one application that demonstrates the power/danger of such
(3) using math-like syntax for lifted expressions is common practice
in some families of Haskell-DSELs, eg. Conal Elliott's Fran. As John
pointed out, the predefined class-hierarchy is not really helpful
for such endeavours, but if one isn't picky, one may ignore classes
not used.. the "trick" is to lift even constants, so when you get
to applications, all components are already lifted, and lifting most
arithmetic works out fine (Boolean operations are another matter).
note, however, that the resulting language, while looking
mathematically pure and permitting concise expression of complex
circumstances, may not have the reasoning properties you expect..
At this point I think we have to look at more examples. I'm not
convinced that my position is right, but I'm not convinced that that
it is wrong either. I just think it's promising, based on my
experience. If I were a better person, and had more free time, I would
work on producing examples and working things out myself, and perhaps
write a paper or something.

Of course, experienced people are disagreeing with me, so maybe I
should just accept their good judgment! In any case, I'm afraid I
don't have much more to contribute, beyond the idea itself.
Post by Claus Reinke
Post by Frederik Eaton
It might be hard to modify the type checker to get it to work, but I
think it is possible, and I see no reason not to be as general as
possible.
here I'd agree, although in contrast to you, I'd be talking about a
complex refactoring under programmer control, not about an implicitly
invoked collection of coercions. I played with that idea after Martin
Erwig visited our refactoring project in March, and got to a prototype
type-coercion inference system for a very simple functional language,
because I found the situation with various existing and, as Erwig/Ren
pointed out, apparently unrelated monadification algorithms confusing.
apart from the various styles of monadification, which we'd like
to permit, and have the programmer select, e.g., by type annotations,
there is the slight problem that there are an unbounded number of
different monadifications (more if one wants to keep annotations to
a minimum), so one needs a sensible bound (one that does not
exclude any of the alternatives one might want). one also might
want to be able to choose between the alternatives (or tune the
system so that taking the first choice works out ok most of the
time). oh, and it shouldn't be too inefficient, and it is really a pain
to re-implement a type-system just to add a few coercion rules to
it (which is why I haven't extended my mini fpl to Haskell yet..).
Well, I've said a little about why I don't like irreversible
refactoring. I've been reading "Notes on the Synthesis of Form" by
Christopher Alexander, I think this has helped me understand the
process of design in more abstract terms, if you want a reference.

I think the source code of a program should be as close to its initial
specification as possible. The goal should be to make it as easy to
read and modify as it was to write. However, if the design is spread
out over write/test/debug cycles, as is often the case in interpreted
languages such as perl; or refactor cycles, as you seem to be
proposing; then a lot of the decisions and rationales which determine
that design will not be visible in the final version of the code -
rather, they will be stored in the succession of modifications which
have been applied to create this final version. But these
modifications are, as it were, difficult to go back and modify. The
more a program is produced through a process of accretion or
evolution, the more tightly coupled various aspects of its design will
be, and the more difficult it will be to change any one of them.

Even if most of the design decisions are good ones, eventually the
number of unfixable bad decisions will grow until continued
development of a particular component becomes untenable. This might
happen at a function level or a module level or a program level - but
in any case I think the development style in question should be
avoided where possible, and made avoidable where feasible.

Frederik
Aaron Denney
2005-09-10 03:55:12 UTC
Post by Frederik Eaton
Post by Aaron Denney
I thought the easy answer would be to inject non-monadic values into the monad.
I don't know if this is the right way of looking at it. Do you have an
example?
In a do block,
1 + [2,3,4]

would get turned into

liftM2 (+) (return 1) [2, 3, 4]

(I actually think this whole thing is a horrible idea, much for the
reasons Cale Gibbard puts forward.)
Post by Frederik Eaton
Would it mean treating the 'Monad' class specially? Perhaps, but I
don't think this is a reason to avoid it. Further, it is likely that
whatever is done to extend the type checker could be given a general
meta-declaration in the same spirit as "infixr" etc.
--
Aaron Denney
-><-
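[Aaron's translation can be checked directly; in the list monad, return 1
is [1], so the lifted sum adds 1 to each element:]

```haskell
import Control.Monad (liftM2)

-- The lifting Aaron describes: 1 + [2,3,4] would become
-- liftM2 (+) (return 1) [2,3,4], i.e. [3,4,5] in the list monad.
liftedExpr :: [Int]
liftedExpr = liftM2 (+) (return 1) [2, 3, 4]
```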
Wolfgang Jeltsch
2005-09-10 22:59:00 UTC
Post by Aaron Denney
[...]
But the do syntax isn't a very drastic special treatment of monads. There is
a relatively simple syntax-based transformation into code without do
expressions.

Best wishes,
Wolfgang
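[The syntax-based transformation Wolfgang refers to is the standard
desugaring of do notation into >>= and lambda. A small demonstration in
the list monad, with my own example values:]

```haskell
-- A do block and its hand-desugared form, which must agree.
pairs :: [(Int, Char)]
pairs = do
  x <- [1, 2]
  c <- "ab"
  return (x, c)

pairsDesugared :: [(Int, Char)]
pairsDesugared = [1, 2] >>= \x -> "ab" >>= \c -> return (x, c)
```

Both produce every combination of an element and a character, which is
why do notation is mere sugar rather than special monad treatment.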
Wolfgang Jeltsch
2005-09-10 22:45:02 UTC
Post by Frederik Eaton
[...]
Would it mean treating the 'Monad' class specially? Perhaps, but I
don't think this is a reason to avoid it.
As far as I can see, your approach would make Haskell a kind of imperative
programming language. Side-effects would be hidden in expressions which is a
thing I want to see strictly avoided.
Post by Frederik Eaton
[...]
Also, I do not think that template haskell is powerful enough to
support this, but I'm willing to be proven wrong.
I suppose that Template Haskell is powerful enough to automatically declare
instances of classes like Num for monadic types, based on instances for
the underlying non-monadic types.
Post by Frederik Eaton
Frederik
Best wishes,
Wolfgang
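[An instance of the kind Wolfgang suggests Template Haskell could
generate can also be written by hand for a particular monad. Here is a
sketch for Maybe, assuming a modern GHC where Num has no Eq/Show
superclasses; note GHC rejects a fully general "instance Num (m a)",
since instance heads need a concrete type constructor:]

```haskell
import Control.Monad (liftM2)

-- A hand-written Num instance lifted over Maybe, of the kind that could
-- be generated mechanically for other monadic types.
instance Num a => Num (Maybe a) where
  (+)         = liftM2 (+)
  (-)         = liftM2 (-)
  (*)         = liftM2 (*)
  abs         = fmap abs
  signum      = fmap signum
  fromInteger = Just . fromInteger

-- With fromInteger in place, even the bare literal 1 is lifted:
mixed :: Maybe Int
mixed = 1 + Just 2
```

This gives the mixed-expression style the thread asks about, but only
for explicitly instantiated classes and types, not as a general
automatic lifting.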
Lyle Kopnicky
2005-09-15 04:34:42 UTC
It appears to me that:

* Many people don't like having to "extract values" from a monad on a
separate line, but would like to be able to mix monadic return values
into pure expressions, on the way to calculating a monadic result.

* Some people want to fix this by doing an implicit lifting operation on
functions, when their parameters are monadic

* It is not really clear which values are "monadic" and which aren't, so
the implicit lifting is a guessing game, and likely to produce
confounding errors.

* If you aren't going to be precise about what's in the monad and what's
not, you might as well use ML.

I propose instead an explicit, lightweight notation, with a simple
desugaring. Instead of:

do
  putStr "What is your name? "
  s <- getLine
  putStrLn ("Hello " ++ s)

I would like to write something like:

do
  putStr "What is your name? "
  putStrLn ("Hello " ++ {* getLine *})

In other words, we use some kind of special brackets (I don't care what)
to indicate that this value needs to be extracted from the monad on a
previous line, and inserted back here as a "pure" value. If these occur
multiply nested, they can be flattened out according to a dependency
rule. E.g.,

do
  m1 {* m2 {* m3 *} v4 *} {* m5 *}

...would be rewritten as...

do
  v3 <- m3
  v2 <- m2 v3 v4
  v5 <- m5
  m1 v2 v5

...and...

do
  m1 {* {* m2 *} *}

...would become...

do
  m2a <- m2
  v2 <- m2a
  m1 v2

I'm sure this has been suggested before... but what do folks think?

- Lyle
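[Lyle's flattening rule can be checked against a concrete monad. Here
the desugared form of his first example is written out in Maybe, with
hypothetical (primed) definitions standing in for m1..m5 and v4:]

```haskell
-- Hypothetical definitions so the desugared block can actually run.
m3' :: Maybe Int
m3' = Just 1

m2' :: Int -> Int -> Maybe Int
m2' a b = Just (a + b)

m5' :: Maybe Int
m5' = Just 100

m1' :: Int -> Int -> Maybe Int
m1' a b = Just (a * b)

v4' :: Int
v4' = 2

-- The desugaring of: m1 {* m2 {* m3 *} v4 *} {* m5 *}
-- Each bracketed action is bound on a previous line, innermost first.
flattened :: Maybe Int
flattened = do
  v3 <- m3'
  v2 <- m2' v3 v4'
  v5 <- m5'
  m1' v2 v5
```

Each {* ... *} simply becomes one extra binding in dependency order, so
the proposal is a purely syntactic transformation, unlike the implicit
lifting discussed earlier in the thread.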