
Reads of undef memory must not cause the behavior to be undefined in general. #30500

Closed
mahkoh opened this issue Dec 20, 2015 · 27 comments

Comments

@mahkoh
Contributor

mahkoh commented Dec 20, 2015

The following is a list of behavior which is forbidden in all Rust code

  • Reads of undef (uninitialized) memory

This cannot be, because memcpy will read padding bytes, which are undef. It's also not true in practice, because in

let x: u8 = undef;
let y: u16 = x as u16 + 0xab00;
let z: u16 = y & 0xff00;

z will be 0xab00 and not undef.
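For concreteness, the same computation written as compilable Rust might look like the sketch below. It uses std::mem::uninitialized to obtain the undef value; whether that read is allowed is exactly what this issue questions.

fn masked() -> u16 {
    // Obtain an uninitialized u8; reading it is the "read of undef memory"
    // under discussion.
    let x: u8 = unsafe { std::mem::uninitialized() };
    let y: u16 = x as u16 + 0xab00;
    y & 0xff00 // under LLVM's semantics for add, this evaluates to 0xab00
}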

@steveklabnik
Member

/cc @rust-lang/lang

@eefriedman
Contributor

This cannot be, because memcpy will read padding bytes, which are undef.

I'm not sure what you mean here; the fact that rustc sometimes generates a call to memcpy is mostly irrelevant to the semantics of Rust code.

z will be 0xab00 and not undef.

The LLVM add instruction behaves this way, but the rustc "+" operator isn't guaranteed to translate directly to the LLVM add instruction. In fact, it doesn't in overflow checking mode.
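To illustrate that last point, here is a rough model (a sketch, not what rustc literally emits) of what a + b means when overflow checks are enabled: an addition followed by a branch, rather than a bare LLVM add. The function name is made up for illustration.

// Approximate desugaring of `a + b` in an overflow-checking build.
fn add_with_check(a: u16, b: u16) -> u16 {
    match a.checked_add(b) {
        Some(sum) => sum,
        None => panic!("attempt to add with overflow"),
    }
}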

@mahkoh
Contributor Author

mahkoh commented Dec 20, 2015

@eefriedman

  1. Not if memcpy is itself written in rust code (see the sketch below).
  2. It doesn't make a difference. x as u16 will have the upper byte zeroed.
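To make the first point concrete, here is a minimal sketch of the kind of memcpy one might write in Rust (the name copy_bytes is hypothetical). Copying a struct that contains padding through such a function reads the padding bytes, which are undef.

// Naive byte-wise copy: reads every byte of `src`, including any padding.
unsafe fn copy_bytes<T>(dst: *mut T, src: *const T) {
    let dst = dst as *mut u8;
    let src = src as *const u8;
    for i in 0..std::mem::size_of::<T>() {
        *dst.offset(i as isize) = *src.offset(i as isize);
    }
}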

@mahkoh
Contributor Author

mahkoh commented Dec 20, 2015

The reason for the current rule seems to be: "I don't know how to formulate the real rules so I'll simply disallow it completely."

@eefriedman
Contributor

Not if memcpy is itself written in rust code.

There isn't any fundamental need for it to be legal to write memcpy in Rust... it's part of the runtime. Granted, it would be convenient in some cases. Maybe we can add a special case for "copying" an undef Copy value.

It doesn't make a difference. x as u16 will have the upper byte zeroed.

You're not looking at this at the right level. "add" is opaque; in theory, it could involve indexing into an array using the values of the operands, which could crash the program if undef is involved.

Anyway, trying to make promises about how exactly arithmetic is implemented leads down a path which isn't really productive.

@mahkoh
Contributor Author

mahkoh commented Dec 20, 2015

There isn't any fundamental need for it to be legal to write memcpy in Rust

Whew

Maybe we can add a special case for "copying" an undef Copy value.

That's the opposite of what should be done. The formulation must be more abstract so that everything legal can be done with undef while still disallowing everything that causes the behavior to be undefined. Maybe one even has to go so far as to add a primitive size-one type that cannot be interpreted as any other type (without transmute) and has exactly one value that spans all u8 values. One might interpret it as a one-byte type with one byte of padding, but it's not really padding.
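A minimal sketch of what such a type could look like as a plain wrapper (the name OpaqueByte is made up for illustration):

// One byte of storage whose contents are never inspected as a u8. The type
// has a single logical value that covers every bit pattern, so reading it
// can never observe an "invalid" value.
struct OpaqueByte {
    _storage: u8,
}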

You're not looking at this at the right level.

I see that adding an example did more harm than good.

@huonw
Member

huonw commented Dec 20, 2015

It seems like spec'ing the example of addition/bitmasks would also require expanding the notion of "undefined value" to be at the bit level, and for the language to have some idea about the results of (special cases of) operators. The latter seems like a rather open-ended space, with very complicated properties encodable. This isn't a blocker, but it does mean touching this requires some care. (Byte-level undef would work for that specific example, but it seems restrictive: what if u8 were bool and 0xFF00 were 0xFFFE? And even then, operators still need to be understood at the byte level.)

@mahkoh
Contributor Author

mahkoh commented Dec 20, 2015

require expanding the notion of "undefined value" to be at the bit level

Does the Rust manual define undef at all? IIRC it only links to the LLVM manual, which already talks about undef at the bit level.

It seems like spec'ing the example of addition/bitmasks would also

The example is not important. It's just supposed to show that, at the LLVM level, working with undef can lead to definite results and doesn't necessarily lead to undef propagation. Actually using undef values in Rust code for anything remotely complicated can easily lead to UB because of undef propagation. However, there is no reason for the manual to claim that reading undef always leads to UB, which is much stricter than what LLVM requires.

@huonw
Member

huonw commented Dec 20, 2015

Rust isn't LLVM, and we don't necessarily want to make every guarantee that it does. I'm personally not that happy that we defer to LLVM for many definitions for convenience, and I expect this to not be the case in the future e.g. if an actual spec is written. (Feel free to read it as if I said "introducing a more formal Rust undef, which is tracked at the bit level" in place of "expanding the notion ... bit level".)

On the point of undefined values, you're right that we just link to LLVM's definition of undef, but we do so in the context of reading undef memory, which doesn't say anything about an in-register value as we'd have for arithmetic, so even the most pedantic reading is vague. Also, I don't recall any team discussion featuring undef values where anything other than the whole value was considered undef. Summary: this is under-spec'd and the existing underlying/assumed sense of this area at the Rust level is almost certainly not for individual bits.

Of course, you're also right that using LLVM's undef can lead to definite results even without tracking bits (e.g. let x = if undef { 0 } else { 0 }; will always give x == 0), but see the first sentence.

@mahkoh
Contributor Author

mahkoh commented Dec 21, 2015

I'm personally not that happy that we defer to LLVM for many definitions for convenience, and I expect this to not be the case in the future e.g. if an actual spec is written.

You're already de facto guaranteeing the current behavior by having it work. Significant changes cannot be made without silently (!) breaking code which is the complete opposite of stability.

Even if you had reliable normative information, undefined operations that have behaved reliably for some time cannot always be made to behave differently without causing many problems (e.g. signed integer overflow in C).

But there is no reliable normative information and thus people have to rely on what works in the current implementation for just about everything. E.g. the only official information about the behavior of transmute that can be found is

Unsafely transforms a value of one type into a value of another type.

Both types must have the same size.

This doesn't even guarantee that the returned value has anything to do with the input value. Precisely because transmute is completely unspecified, the current implementation must be treated as normative.

The same applies to undef: memcpy works now and it works according to LLVM, so it has to continue to work.

@mahkoh
Contributor Author

mahkoh commented Dec 21, 2015

For example, the following is discouraged by a lint but does not cause any problems:

fn f(&self) -> &Self;

fn f_mut(&mut self) -> &mut Self {
    unsafe { transmute(self.f()) }
}

At the same time there is other, unreliable information flying around that says that transmuting & to &mut is always undefined. Strictly speaking this is correct because transmute is mostly undefined, but if that's the argument then transmuting &mut to & is equally undefined.

The lack of any kind of information regarding transmute means that you have no way to distinguish between safe, dangerous, and forbidden operations. All of them are equally undefined.

@steveklabnik
Member

Precisely because transmute is completely unspecified, the current implementation must be treated as normative.

I strongly disagree with this. We should be seeking to specify such things, not just accepting whatever random behavior happens to work in these corners.

@arielb1
Contributor

arielb1 commented Dec 21, 2015

@mahkoh

There's a difference between what we specify and what we allow LLVM to assume.

The reason there is a lint about transmuting & to &mut is that it causes instant death by aliasing rule violations - we basically allow ourselves to add MIR optimizations that will destroy your code in that case.

We still have not really decided how much "instant aliasing death" is a thing - @thestinger preferred to have access-based aliasing rules and I think he's got a point - but we didn't specify that it was not a thing.

On the memcpy issue: the part of C's semantics that allows memcpy to be implemented in C code is quite ugly. I would prefer not to specify anything like that.

The point that I would like to make clear is the difference between well-defined, unspecified and undefined behaviour:

  • well-defined behaviour refers to things that ordinary programs are allowed to assume; the compiler must not break them.
  • undefined behaviour refers to things that the compiler is allowed to assume do not happen; no program may try to provoke them.
  • unspecified behaviour refers to things that are not a part of the specification. The compiler is not granted carte blanche to abuse them, but neither are applications.

There might very well be some other specification that defines some unspecified behaviour (either as something well-defined or as undefined). For example, system calls are specified by your favourite OS's documentation.

At this moment, we have no plans to publicly specify everything that is needed by a stable libcore. This means that anyone who wants to implement one (currently that's either us or you) must look at the compiler source for a specification and coordinate with us to avoid breakage. This of course does not mean that undefined behaviour is triggered - that would give the compiler a license to destroy all Rust code. There are boundaries, they are just somewhat unclear. I am sorry that this interferes with your project.

@nikomatsakis
Contributor

@mahkoh

You're already de facto guaranteeing the current behavior by having it work. Significant changes cannot be made without silently (!) breaking code which is the complete opposite of stability.

This same reasoning can be used to argue that, e.g., we should never change our sorting algorithm, because it may invoke the comparator in a different order, and so forth. We've also made it clear that various low-level details are expected to change, and that authors of unsafe code (in particular) will need to track the language as it evolves.

That said, we should definitely consider "common practice" when deciding what kinds of things are undefined behavior. This is only partially because of existing code -- what I am most concerned about is just that if the rules are too complex and abstract (that is, too divorced from some abstract model of how the machine operates), people won't be able to keep them in their head, and so they will write nonconformant code that does surprising things when optimized.

From what I can see, C has this problem in spades. Infinite loops, TBAA, etc. all make it surprisingly hard to write "correct" C code that does anything clever. But of course people write all kinds of clever things in C, many of which are compiler issues waiting to happen.

I think @mahkoh has a point that it would be nice to affirm that particular idioms (e.g., a naively written memcpy that "seems right") will work without leading to undefined behavior. I'm just not sure if that's an urgent priority: it's a rather complex equation, since we must also consider what LLVM will do (and to what extent we can control that), and so forth, and we don't want to wind up guaranteeing too much. Put another way, I am sympathetic with the aims of this RFC, but I also wonder if it would be better to try to tackle the problem of "stabilizing" unsafe code patterns in a more wholesale fashion, rather than going at it piecemeal.

@mahkoh
Contributor Author

mahkoh commented Dec 21, 2015

@arielb1

The reason there is a lint about transmuting & to &mut is that it causes instant death by aliasing rule violations - we basically allow ourselves to add MIR optimizations that will destroy your code in that case.

There is no lint against transmuting (&, &) to (&mut, &mut), and the manual only talks about LLVM's aliasing rules that must not be violated. If you're saying that you might break such things freely in the future, then this certainly doesn't just affect "my project".
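A minimal sketch of that loophole (function names hypothetical; the lint name is quoted from memory): rustc flags the direct transmute but not the equivalent one done through a tuple.

#[allow(mutable_transmutes)] // the direct form is what the lint catches
unsafe fn direct<'a>(r: &'a u8) -> &'a mut u8 {
    std::mem::transmute(r)
}

unsafe fn via_tuple<'a>(r: (&'a u8, &'a u8)) -> (&'a mut u8, &'a mut u8) {
    std::mem::transmute(r) // no lint fires on the tuple form
}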

On the memcpy issue: the part of C's semantics that allows memcpy to be implemented in C code is quite ugly.

There are many special cases for char, but apart from that I don't recall anything particularly ugly. It would certainly be better if one did not have to write memcpy in terms of u8 and then rely on LLVM to optimize it.

The point that I would like to make clear is the difference between well-defined, unspecified and undefined behaviour:

I disagree with your definitions. Here are mine:

  • undefined behavior is the behavior of an operation that has not been explicitly defined by the specification
  • unspecified behavior is behavior that depends on the implementation and that the implementation need not specify
  • implementation defined behavior is behavior that is defined by each implementation

With these definitions, undefined behavior is what you called unspecified behavior. I think the C++11 standard agrees with my definition:

undefined behavior
behavior for which this International Standard imposes no requirements
[ Note: Undefined behavior may be expected when this International Standard omits any explicit definition of behavior ...

There is a significant difference between undefined behavior and unspecified behavior so we have to agree on what we're talking about.

At this moment, we have no plans to publicly specify everything that is needed by a stable libcore

I don't think anything in this issue is restricted to code in a libcore. In fact, libcore doesn't contain a memcpy so I'm not sure how libcore is related to this issue. A memcpy might be written in many situations: when you write a kernel; when you need a particularly optimized memcpy; when you need a memcpy that can be inlined; etc. And transmutes are certainly used in lots of code.

@nikomatsakis

We've also made it clear that various low-level details are expected to change, and that authors of unsafe code (in particular) will need to track the language as it evolves.

I don't recall this and breaking random unsafe code seems to go completely against the rest of your stability guarantees. Please link to the text where you said this.

@mahkoh
Contributor Author

mahkoh commented Dec 21, 2015

@nikomatsakis

More specifically: I'll be greatly surprised if you've actually said that authors of unsafe code must track the language or else their working code might break without a compiler warning or error, which is what this issue is about.

@aturon
Member

aturon commented Dec 21, 2015

@arielb1
Contributor

arielb1 commented Dec 21, 2015

@mahkoh

That there is no lint against something only means that there is no lint against it, but we may be forced to grandfather some way for (&,&) -> (&mut,&mut) to work (the optimization we want to do here is that if rustc gets to inline both f and f_mut then it may move some reads in f ahead of the return). IIRC C got its memcpy rules in basically this way.

The "memcpy hack" is basically that the representation of types is somehow both undefined and well-defined at the same moment. I don't really want to have that hack in Rust.

With these definitions, undefined behavior is what you called unspecified behavior.

Compiler writers have traditionally taken "imposes no requirements" to mean that they are allowed to make the program do whatever they want in that case, which is basically equivalent to being allowed to assume that it does not happen (because if they assume wrong, something happens, which satisfies the empty set of requirements imposed).

A memcpy might be written in many situations: when you write a kernel; when you need a particularly optimized memcpy; when you need a memcpy that can be inlined; etc.

In that case you would want to write your memcpy in assembly or LLVM IR and use the ABI specification to communicate.

And transmutes are certainly used in lots of code.

That is certainly a very big problem. C's strict aliasing rules are a pretty similar rat's nest, but we need to do something to get out of it.

I don't recall this and breaking random unsafe code seems to go completely against the rest of your stability guarantees. Please link to the text where you said this.

LLVM can already randomly break unsafe code by becoming smarter about exploiting some UB. We only reserve the right to do similar things on our side.

@mahkoh
Contributor Author

mahkoh commented Dec 22, 2015

but we may be forced to grandfather some way for (&,&) -> (&mut,&mut) to work

Why are you forced to keep this working but are free to break & -> &mut? Don't tell me it's because of some random lint that can be easily circumvented by going (&,) -> (&mut,).

The "memcpy hack" is basically that the representation of types is somehow both undefined and well-defined at the same moment.

I'm not sure what hack you're going on about. There is no hack except for what I already mentioned regarding chars.

Compiler writers have traditionally taken "imposes no requirements" to mean that they are allowed to make the program do whatever they want in that case, which is basically equivalent to being allowed to assume that it does not happen

I've not said anything contradicting this. The point was that what you categorize under "unspecified behavior" is already "undefined behavior" and that for something to be unspecified it has to be explicitly mentioned in the spec. I realize that this might be confusing, so let me refer you again to the definitions of those terms in the C++11 standard.

In that case you would want to write your memcpy in assembly or LLVM IR and use the ABI specification to communicate.

At the same time you're telling people that matching on an empty enum is the official way to get an llvm unreachable instruction. Who made it official? Can you point me to the official documentation containing this? If anything, this is even less valid than writing your favorite memcpy in rust code.
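For reference, the idiom in question is roughly the following sketch (names are illustrative): an enum with no variants can never be constructed, so a match on it needs no arms and the compiler treats that point as unreachable.

enum Void {} // no variants, so no value of this type can exist

fn never_happens(v: Void) -> ! {
    match v {} // no arms needed; this point cannot be reached
}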

[transmutes are] certainly a very big problem.

Maybe people should just write them in assembly or LLVM IR.

C's strict aliasing rules are a pretty similar rat's nest, but we need to do something to get out of it.

Neither LLVM nor Rust uses TBAA, which is the main source of UB related to C's aliasing rules. How is this in any way related to the current discussion?

LLVM can already randomly break unsafe code by becoming smarter about exploiting some UB. We only reserve the right to do similar things on our side.

So you only reserve the right to break just about everything because just about everything is UB in rust (see above). LLVM can do this because they actually have a decent amount of documentation allowing people to write code without having to rely on UB.

@arielb1
Contributor

arielb1 commented Dec 22, 2015

Why are you forced to keep this working but are free to break & -> &mut? Don't tell me it's because of some random lint that can be easily circumvented by going (&,) -> (&mut,).

"grandfather" = be required to figure out some way to not break it because there are already programs using it and we don't want to release Rust 2.0. Maybe we will be forced to grandfather (&,) -> (&mut,) too, and maybe that will flow out naturally from the previous one, but I strongly prefer to minimize the amount of hacks required to make existing code work.

I'm not sure what hack you're going on about. There is no hack except for what I already mentioned regarding chars.

C requires that every data structure have a representation as an array of integers (characters) in a round-trippable way, while making that representation basically "undefined" in many ways. That frustrates "symbolic" implementations. I would prefer that Rust's specification be implementable symbolically (note that this does not mean that all the code in libcore works symbolically - libcore is allowed to rely on unspecified behaviour).

The point was that what you categorize under "unspecified behavior" is already "undefined behavior" and that for something to be unspecified it has to be explicitly mentioned in the spec.

The C specification tries hard not to have any things that are not defined anywhere. On the other hand, the precise sequence of assembly instructions emitted by a C compiler is not defined in any place, but saying that it is something the compiler is allowed to assume makes no sense. When we improve our spec, we should try to make sure that all cases of "in that other case, behaviour is unspecified" are explicitly stated.

At the same time you're telling people that matching on an empty enum is the official way to get an llvm unreachable instruction. Who made it official? Can you point me to the official documentation containing this? If anything, this is even less valid than writing your favorite memcpy in rust code.

Rust's codegen is quite explicitly unspecified. Even intrinsics are basically "emit the designated instruction, along with all necessary wrappers", so specifying that something lowers to exactly an unreachable is basically impossible (especially because unreachable is UB, so from an operational specification point of view we can emit anything we like). The "official" part basically meant that this is the Rust-Team-sanctioned way of generating an unreachable.

Maybe people should just write them in assembly or LLVM IR.

The issues caused by transmute are mostly the values created that don't actually inhabit their types. Doing the transmute itself in assembly would not help. If you can hide your transmute behind an API/ABI boundary, then there is no problem with writing that code in Rust.

For example, you can write f_mut safely as

fn f_mut(&mut self) -> &mut Self {
    unsafe { &mut *(self.f() as *const Self as *mut Self) }
}

At least, that is supposed to be non-UB.

Neither LLVM nor Rust uses TBAA, which is the main source of UB related to C's aliasing rules. How is this in any way related to the current discussion?

Rust has lifetime-based alias analysis, which has the same "utter the right incantations to guard against the evil optimizer" problems as TBAA.

So you only reserve the right to break just about everything because just about everything is UB in rust (see above). LLVM can do this because they actually have a decent amount of documentation allowing people to write code without having to rely on UB.

Rust's specification is in a very sorry state, with many things left unspecified. Incidentally, what is important for future compatibility is not the specification but rather the stability guarantee, which explicitly says that underspecified areas are not stable and can change between releases.

This means that code using these underspecified areas (including type punning) is unfortunately subject to breakage between releases. We try to avoid causing silent breakage, but we prefer that our users be careful around these areas.

@mahkoh
Contributor Author

mahkoh commented Dec 23, 2015

we don't want to release Rust 2.0

Seems like a reasonable idea. Write a spec, break all the unspecified things you want (but not more), release 2.0. It's not like an increase of the major version has to break lots of things. I'd be fine with it breaking unsafe code as long as I get a real spec in return.

Maybe we will be forced to grandfather (&,) -> (&mut,) too, and maybe that will flow out naturally from the previous one, but I strongly prefer to minimize the amount of hacks required to make existing code work.

It seems that if you keep every transmute of &->&mut working in tuples and structs, then actually breaking the only one you're "allowed to" (which is questionable) is more work than keeping that one working too.

C requires that every data structure have a representation as an array of integers (characters) in a round-trippable way, while making that representation basically "undefined" in many ways. That frustrates "symbolic" implementations.

I see what you mean but I don't consider this a hack.

I would prefer that Rust's specification be implementable symbolically (note that this does not mean that all the code in libcore works symbolically - libcore is allowed to rely on unspecified behaviour).

Feel free to mention your concerns in #30407. While your idea is appealing on a theoretical level, I don't think it is realistic for a systems language, since it makes it harder to write low-level code (allocators, kernels, etc.) in Rust. Calling stable Rust a systems language is already questionable (it fails the simple test that a systems language can, theoretically, compile itself: stable Rust requires language items, but stable Rust will never be able to compile language items; that is, a stable Rust compiler cannot even theoretically be self-hosting), and this idea makes even nightly Rust less systems-y.

But, like I said, I see the value of your idea and I think a theoretical spec could very well keep the representation completely unspecified. But at the same time, the rustc documentation has to extend said spec to specify parts of the representation. Code that relies on such details is then of course not portable between implementations.

On the other hand, the precise sequence of assembly instructions emitted by a C compiler is not defined in any place, but saying that it is something the compiler is allowed to assume makes no sense.

I'm not sure what you're saying here. Of course the compiler is allowed to assume that the sequence of assembly instructions is not defined. Otherwise it could not perform any optimizations.

The C standard describes the behavior of the abstract machine. An implementation is allowed to handle the details in any way it wants as long as the observable behavior agrees with the one described in the standard.

The "official" part basically meant that this is the Rust-Team-sanctioned way of generating an unreachable.

I think there isn't really a difference between "official" and "sanctioned". As long as it's not written down somewhere, it's no more than hearsay. If such a thing has actually been discussed and agreed on, then write it down where everyone can look it up so that we can properly language lawyer once you break it.

For example, you can write f_mut safely as

Now we're getting somewhere! I assume that by "safely" you mean that your way is "sanctioned"? If so then I'm surprised because one would think that your way is more dangerous than the transmute.

unsafe { &mut *(self.f() as *const Self as *mut Self) }
                            ^

I'd assume that, at the marked point, the borrow has been "released" and that self becomes accessible again. So that at the following point

unsafe { &mut *(self.f() as *const Self as *mut Self) }
         ^

we've created two live mut pointers to the same address, and the second reference has an unbounded lifetime. The transmute version doesn't seem to have this problem since it goes directly from &'anon to &'anon mut without releasing the borrow. But if you promise that your version will continue to work then I'll happily replace all transmutes with it.
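To spell the concern out, here is a hypothetical step-by-step expansion of that one-liner (this is only how I read it, not anything the compiler is documented to do):

fn f_mut(&mut self) -> &mut Self {
    let shared: &Self = self.f();                  // shared borrow of *self
    let raw = shared as *const Self as *mut Self;  // the borrow is arguably "released" here
    unsafe { &mut *raw }                           // a new &mut with an unbounded lifetime
}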

Edit: See also #30424 which is closely related.

At least, that is supposed to be non-UB.

There seem to be lots of things related to the interaction between pointers and references that are completely unspecified. It's one of the things mentioned in @aturon's link.

which explicitly says that underspecified areas are not stable and can change between releases

The text linked by @aturon uses lots of qualifiers to restrict this freedom. And references in particular are heavily specified by the following line in the documentation:

&mut and & follow LLVM’s scoped noalias model

That line links to LLVM's docs, which have lots of text describing noalias.

@arielb1
Contributor

arielb1 commented Dec 23, 2015

It seems that if you keep every transmute of &->&mut working in tuples and structs, then actually breaking the only one you're "allowed to" (which is questionable) is more work than keeping that one working too.

We will find some reasonable semantics. We should try not to break code in practice, and to allow an upgrade path from what we break. Under that constraint, we should try to keep the semantics as clear as possible.

Feel free to mention your concerns in #30407. While your idea is appealing on a theoretical level, I don't think it is realistic for a systems language, since it makes it harder to write low-level code (allocators, kernels, etc.) in Rust.

Clearly we need a "low-level Rust" specification in addition to the "high-level Rust" specification - our high-level specification does not talk about ABIs at all. However, I don't see much value in allowing the C standard memcpy to be legal Rust, especially because it is typically written in assembly.

Calling stable Rust a systems language is already questionable (it fails the simple test that a systems language can, theoretically, compile itself: stable Rust requires language items, but stable Rust will never be able to compile language items; that is, a stable Rust compiler cannot even theoretically be self-hosting), and this idea makes even nightly Rust less systems-y.

rustc does not require any lang-items. I don't see how this situation is qualitatively different from libc using linked assembly files for various system call stubs.

I think there isn't really a difference between "official" and "sanctioned". As long as it's not written down somewhere, it's no more than hearsay. If such a thing has actually been discussed and agreed on, then write it down where everyone can look it up so that we can properly language lawyer once you break it.

It is basically at the level of official hearsay. We are not willing to document performance characteristics at any level beyond that. Optimizers, both ours and LLVM's, can generate whatever code they feel like as long as it functions correctly. We try to make them generate fast code for things people write, especially the "officially sanctioned" ways, but we are not willing to promise anything.

As an analogy, sysenter/sysexit is Intel's officially sanctioned way of making system calls on x86-64: Intel tries very hard to make it fast, but I am sure that if you misconfigure your processor correctly you can make it slow, and anyway Intel does not give any performance guarantees for it.

@Aatch
Contributor

Aatch commented Dec 27, 2015

Is there a point to all of this? It just seems like an excuse to complain about things.

Ultimately, in the absence of an actual spec, the only thing we can go on is common sense and current behaviour. The corners are where common sense fails and current behaviour only works via luck. However, until we get a spec, there's no point in arguing about stuff like this.

I'm in favour of just closing this issue unless some actionable issue is presented. We have other channels for this kind of discussion.

@arielb1
Contributor

arielb1 commented Dec 27, 2015

@Aatch

I think we should have some organized place for tracking the Rust memory model mess.

@huonw
Member

huonw commented Jan 5, 2016

As @Aatch says, there's nothing really actionable here: spec-ing this sort of thing is the realm of an RFC, since there are design decisions to make and tradeoffs to be considered (e.g. http://www.playingwithpointers.com/problem-with-undef.html). Therefore, I'm closing.

@huonw closed this as completed Jan 5, 2016
@pnkfelix
Member

pnkfelix commented Jan 6, 2016

@huonw would you mind opening an RFC issue for this and linking it here? As Ariel said, we should have some central place to discuss this.

I'd do it but I'm on a mobile device for the next few hours

@pnkfelix
Member

pnkfelix commented Jan 6, 2016

Okay, I opened an RFC issue for Rust needing a memory model; cc rust-lang/rfcs#1447
