
Add devcontainer #576

Open · wants to merge 2 commits into base: dev
Conversation


@trdthg trdthg commented Dec 10, 2024

Description

Resolves #560

  • Dockerfile: install necessary packages, zsh, oh-my-zsh
  • devcontainer.json: add some recommended VS Code extensions
  • Makefile: set up ctg, isac, riscof, sail, and the rv32/64 toolchains

Usage:

Note:

I did not put the setup content into the Dockerfile because we cannot reliably obtain all dependencies through apt install at the moment, and users may want to do some customization (such as removing, updating, or adding things) before installation.

  • We need to download a lot of things, mostly toolchains. If your network is poor the experience may be very bad, so I broke each step down into a separate make target
  • Currently we cannot obtain a precompiled binary of the sail model, so we need to clone and compile it manually; I placed it under /workspaces, at the same level as the act repository
  • riscof is installed directly via pip for now, but I want to clone it locally, as we sometimes need to modify it. (Why not merge riscof into the act repository as well?)

I have tested the entire process locally, and the experience is very good
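To illustrate the split-into-targets idea, a minimal sketch follows. The target names here are invented for illustration, not necessarily the ones in the PR's Makefile; the point is that each dependency gets its own target, so a failed download only requires re-running one step.

```shell
# Illustrative only: a tiny Makefile where each setup step is its own target.
# (.RECIPEPREFIX lets this sketch use ">" instead of a literal tab in recipes.)
cat > /tmp/devcontainer-demo.mk <<'EOF'
.RECIPEPREFIX = >
.PHONY: all toolchain sail riscof
all: toolchain sail riscof
toolchain:
>@echo "would fetch the rv32/rv64 toolchains"
sail:
>@echo "would clone and build the sail model under /workspaces"
riscof:
>@echo "would pip-install riscof"
EOF
# After a network failure, re-run just the step that failed:
make -f /tmp/devcontainer-demo.mk sail
```

This keeps a flaky mirror from forcing a full re-download of everything.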

@pbreuer pbreuer left a comment
OK, first of all you need to list the four files in this (patch?) and say what they are and what they are for.
The shell file is a one-liner, does something with git, and therefore I have no idea what it is for or why anyone would want it. Looks like it sets some parameters. As a shell script it is missing the bang line at top (#! /bin/sh). It is missing an author claim so people know who to blame and ask (blah, blah, licence, blah) and the biggie is the lack of an explanation of what it is/is for. To me it looks dangerous! It mentions "/workspaces" which does not exist on my machine, and is a root directory! Oww!
There is a short json file, again I have no idea what it is for, and it needs to say what it is, who made it, and why. It mentions the shell file so it is presumably some link between something and something ... likely a description of the shell file for some gui, as it mentions an X display number (that's not a number of a display I am working on!)
There is a "docker file" which seems to contain four commands to run in sequence in order to install support infrastructure for building something (what?). Seem to be commands with apt for ubuntu to pull some ordinary build stuff, then a one liner to pull with wget a shell script that maybe is some decorative setup for a zsh running "in" docker, if that has a gui, as I suppose it does.
WHAT is all this for? It seems aimed at a gui, which is something I would not use. The idea in open source is that you engage with source code, not distance yourself from it via a gui!
FINALLY .. there is a Makefile! Hurray. This is the ONLY useful item. You need to describe what it is for, who is the author, etc. It should set parameters at the beginning and follow with rules later. Everything is all mixed up here (aka "unreadable"!).
The first thing it does is set PATH, which is completely unacceptable, because that must be configurable to a user's taste. I imagine docker has some fixed paths and these are they, largely.
At this point you should ask yourself what good this is doing you. People can run makefiles on their own! They just type "make"! WHY would anyone need to run it inside docker? Please explain! In the first place one must not install stuff on one's machine that is not under the control of the system's installation and configuration manager, and this is doing just that, apparently. It is entirely unacceptable! Unless ... docker can figure out what the distro is, and build packages for it on the fly, and install them. Can it?
That would be extremely unreliable - its decisions would inevitably conflict with the distro packagers' own, so not likely.
Sigh .. you need to determine what this is FOR. There is no difficulty in downloading stuff from the riscv area and putting in all in some directory in /usr/local/src, for example. The problem then is that all that stuff has been developed by people who seem to have little idea how to code, or how to develop code that is installable and maintainable (two sides of the same coin!). You need to HELP solve that, so you should be doing something that adds the intelligence and knowhow that they have lacked. You need to have a makefile that build what they want to build, yes, but WITHOUT messing up your machine with all these extraneous local extra packages etc. You will say that docker manages its own area, but I at least want no area that is not under the control of the package manager on the machine. The question is how to integrate with that.
The simplest method is to mount a transparent file system over the real file system in a sandbox, build whatever you need with docker in there, install it into /usr/local in the transparent system, then take a tar of the binary installation, move the tar out of the transparent file system and step out of the sandbox, destroy the sandbox and all the docker stuff, and then convert the tar to a distro package using alien or whatever you prefer, then install with the distro package manager. If you like you can do without the transparent file system mount; there are plenty of applications that replace "install" with a script that logs where things go and you can make a list of things to tar in /usr/local from that.
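The log-and-tar variant of this can be sketched with a plain staging prefix (all paths and the tool name here are invented for illustration):

```shell
# Sketch: stage an install into a scratch prefix instead of the live /usr/local,
# then tar the staged tree. A tool such as alien could then turn a tar laid out
# this way into a distro package that the package manager installs and tracks.
STAGE=$(mktemp -d)
mkdir -p "$STAGE/usr/local/bin"
# Stand-in for something like "make install DESTDIR=$STAGE":
printf '#!/bin/sh\necho hello\n' > "$STAGE/usr/local/bin/demo-tool"
chmod +x "$STAGE/usr/local/bin/demo-tool"
tar -C "$STAGE" -cf /tmp/demo-install.tar usr
tar -tf /tmp/demo-install.tar   # lists exactly what would land on the system
```

Nothing outside the scratch directory and /tmp is touched, which is the point of the exercise.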
But the problem with that and the similar method via docker that it looks like you are proposing is that you are not adding any intelligence or knowledge, so it's got no content. I don't think you will know what the various things you are installing do or where they put things or why or how they make them. That is the knowledge that one wants added via a makefile or other tool. It should organise all that with purpose and design, and that can't be done without knowing.
But to look at the makefile you have supplied, exeunt the disorganisation, it says it runs some setups for a variety of things. You need to explain what is meant by "setup", and what it does ... I have no idea! Please do explain.
You also need to explain what the things are it does setup for, and allow someone to modify all this intelligently with that information to hand via appropriate comment.
You then generate some sense of what of those things is installed via running "command", which is a Ubuntu-only thing and WILL NOT WORK anywhere else, so it is useless. Didn't you just download all these things via docker anyway? So why are you testing? Or is this stuff that docker didn't download and is about to become a victim of the stuff that was downloaded? I suppose so! You do know that "./configure" is generally used to discover what is available, and that will generate a Makefile set up accordingly? You likely don't want the Makefile itself to do discovery.
Actually, if the things whose presence is tested for aren't available, the Makefile seems to try to run "install" on some things using pip. Well! that's what the instructions say to do on the riscv site! Is the Makefile only intended to save people the trouble of reading?
So far the only thing that has happened is that docker has built pip (I suppose) and pip is now building whatever the various things you want installed are, somehow, and I at least am no wiser as to what is going where or what it is for.
Please put lots of writing in to explain what the users choices are and what the consequences of those choices are.
Actually, as far as I can see, no actual building is done? Just whatever they are are got via curl or pip (I don't know what pip does, but I imagine it gets stuff from remote python repositories). Why is that a help to anyone? I can do that!

What one needs is help in building whatever those things are, and help in choosing whether one wants them or not, and help in configuring them to go in the right places and integrating them into the installed system. And that should be done without adding to the system, or modifying it in any way. Provide explanations that allow for informed choice and leave that to the user.

It doesn't help me. Choose one thing to help install, explain what it does, figure out how it can be built in a standard fashion without whatever weirdism the author has misconceived, and do it. For bonus points record where it put things, build a binary tar out of it and convert to the distro package with alien and install that post-hoc.

How about telling ME what those ref.elf.dump files are supposed to be?


trdthg commented Jan 6, 2025

Thank you very much for your reply. I have never had such a detailed discussion before; I need some time to carefully consider these issues.


trdthg commented Jan 6, 2025

How about telling ME what those ref.elf.dump files are supposed to be?

Actually, I don't quite understand what you mean by ref.elf.dump. How did you get it, or where is the doc that mentions it?

If you run riscof coverage, you will get:

tree riscof-plugins/rv64_cmo/riscof_work/rv64i_m/CMO/src/cbo.zero-01.S
riscof-plugins/rv64_cmo/riscof_work/rv64i_m/CMO/src/cbo.zero-01.S
├── cbo.zero-01.log
├── coverage.rpt
├── ref.cgf
├── ref.disass
├── ref.elf
├── Reference-sail_c_simulator.signature
└── ref.md

If you run riscof run, then:

tree riscof-plugins/rv64_cmo/riscof_work/rv64i_m/I/src/add-01.S
riscof-plugins/rv64_cmo/riscof_work/rv64i_m/I/src/add-01.S
├── dut
│   ├── DUT-spike.log
│   ├── DUT-spike.signature
│   └── my.elf
└── ref
    ├── add-01.log
    ├── ref.disass
    ├── ref.elf
    └── Reference-sail_c_simulator.signature
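The check riscof performs at the end of a run boils down to comparing those two signature files byte for byte. A minimal sketch of that comparison (file names follow the tree above; the signature contents are invented):

```shell
# Sketch of riscof's final pass/fail check: the DUT signature must match the
# reference model's signature exactly. Signature contents here are invented.
mkdir -p /tmp/sig-demo/dut /tmp/sig-demo/ref
printf 'deadbeef\ncafef00d\n' > /tmp/sig-demo/dut/DUT-spike.signature
printf 'deadbeef\ncafef00d\n' > /tmp/sig-demo/ref/Reference-sail_c_simulator.signature
if cmp -s /tmp/sig-demo/dut/DUT-spike.signature \
          /tmp/sig-demo/ref/Reference-sail_c_simulator.signature
then echo "signatures match"
else echo "signatures differ"
fi
```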

pbreuer commented Jan 6, 2025 via email


allenjbaum commented Jan 6, 2025 via email

pbreuer commented Jan 6, 2025 via email


allenjbaum commented Jan 6, 2025 via email


allenjbaum commented Jan 6, 2025 via email

pbreuer commented Jan 7, 2025 via email

pbreuer commented Jan 7, 2025 via email

@jordancarlin
Contributor

Hi @pbreuer. I'm one of the main contributors to wally and have done a lot of the work on our tool flow, so hopefully I can shed some light here. I'm not sure who you were talking to from the wally team before, but if you have further questions specific to wally the best way to get in touch with us is by opening an issue or discussion in the cvw repository. We monitor everything opened over there pretty closely.

Starting off with a high level overview, the main purpose of riscof is to run the tests already in the riscv-arch-test repository (this repo). Each test in the repo has a string that describes which RISC-V extensions are necessary for it to run. Riscof takes a configuration file as an input that defines which "plugins" to run the tests on. One of these is designated the "reference model" (usually Sail or Spike) and the other the "DUT" (device under test). Additional configuration files pointed to by this main config file define relevant architectural aspects of the DUT, most importantly which extensions are supported.

Riscof then uses this information to determine what subset of the tests should be run on the model (based on the provided list of supported extensions), compiles each of the tests, runs them each on both selected plugins (the reference model and the DUT), and compares the final signature that they dump out. Each plugin has a python file that tells riscof what command to use to compile tests for it and what command to use to run a test on it.

The tests are each designed so that all of the architecturally important information that it is testing for ends up getting stored to a particular region of memory (dubbed the "signature region"). At the end of the test, this signature region of memory is dumped to a file (the specific means of doing this dump are plugin dependent and are part of the previously mentioned plugin python files) so that riscof can compare the two.
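For reference, the main config file is a plain INI file naming the two plugins. A minimal sketch follows; the paths are illustrative and the riscof documentation lists the full set of keys:

```ini
[RISCOF]
ReferencePlugin=sail_cSim
ReferencePluginPath=/path/to/plugins/sail_cSim
DUTPlugin=spike
DUTPluginPath=/path/to/plugins/spike
```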

Riscof is capable of several other things (like measuring coverage), but as previously mentioned, this is more relevant for test development and is not necessary if you are just trying to run the riscv-arch-tests.

In the case of wally specifically, we do things slightly differently. We've found that running the tests through Sail and having riscof do the signature comparison is slower than we would like, especially considering how often we end up running the tests. To get around this we have riscof run with Sail as the reference model and Spike as the DUT. At the end of this it dumps out the signature region from Sail, which should be the officially correct and expected results from the test. We then use our own Makefiles to convert the compiled elf files into hex memfiles that get read into our Verilator simulations by our testbench. At the end of the test, the testbench reads in the expected signature region generated earlier and compares it to the actual state of wally's memory directly in the Verilog of the testbench. This avoids recompiling and regenerating all of the tests and signatures every time we want to run the tests of wally.

The different testing methodology that @allenjbaum was referring to with internal state and the RVVI interface is something that we use for a different set of tests and is not relevant for the riscv-arch-tests.

All of the relevant files for this for wally are in the tests/riscof directory (other than the testbench). The Makefile has the actual riscof commands we use along with all of the flags. The main config file is called config.ini and is in that directory too. There are subdirectories that contain all of the spike plugin and sail plugin specific files.

Hopefully this helps a little and feel free to reach out more here for general riscv-arch-test questions or over on the cvw repo for wally specific questions.


allenjbaum commented Jan 7, 2025 via email

pbreuer commented Jan 7, 2025 via email


allenjbaum commented Jan 7, 2025 via email

@jordancarlin
Contributor

> The trouble I had is that the wally people I talk to seem to know "nothing" about software, not their software or anyone's, so can't tell me anything about what is supposed to be needed, or how it works, or anything. Somebody probably just got it to work, somehow, sometime, and nobody knows now what it was is - it's just a button to press.

Please take a look at my response above for some details on Wally’s use of all of this. If you have further Wally specific questions please ask them on the CVW GitHub repository. Either open an issue or discussion. I don’t see any posts over there from you.


pbreuer commented Jan 15, 2025

I've just come back to have a look at what's going on here. I have no idea what the above is about but it appears to be a testing engineer's upside down view of the universe - FYI I have written several processors, have read the RISCV and Open RISC specs before that, referee for the IEEE and ACM on related subjects, including for the flagship journals of each, and could not personally care less about RISCV compliance or anything else like that, but proved working to a proper formal spec, yes, and the RISCV specs are not at the required level for that, and testing concrete cases is not the way to discover if they do comply, randomly generated or not within various abstract cases or not, and I (literally) wrote the book on the formal semantics of VHDL and other HDL languages, am the author of too many articles in computer science and software engineering and mathematics to count, am the author of compiler compilers, static analysis, other formal methods tools and theoretical foundations, logic compilers, and goodness knows how many linux kernel modules, have been on the faculties of two of the top 10 world rank universities, including the current number 1, so unfortunately for your thesis, I am the one who knows what is what here, and there appears to be a problem known as Dunning-Kruger syndrome arising, in which people like bank managers and company directors are too used to being knowledgeable about their splinter specialism to know the bounds of their knowledge do not extend beyond it, and incorrectly assume they do.

Bad karma for the involved.

Now when you have swallowed some of that weird stuff you've produced back, as far as I recall I was asking what the introverted jargon that was being flung about meant, and was getting gibberish back. As I recall I needed to know how riscof was involved in wally and what it WAS.

The answer appeared to be that it is a suite for generating compliance tests, and possibly evaluating them, and is NOT involved in wally in any way, other than that the folks appear to at one time have generated or begged borrowed or stolen a set of compliance tests that appear in a directory called "riscof".

Having eventually wormed out from this conversation what riscof is and deduced that it is not something I need to pay any attention to, I was able to ignore whatever mess the wally team have produced with respect to it, simply compile the assembler tests into executables, which I am currently running my rewrite of the execrable wally code on.

I have meanwhile had to figure out how to run verilator, have debugged it to find it only pretends to support $dumpvars, and developed a patch for that and am now getting sensible outputs. It appears my rewrite to simplify the wally code is successful since linear programs are executing OK, apart from the odd bug (hem, hem, OK, so addresses appear to be 4x the right value, which explains why jumps aren't that great right now), which I can now chase down, now that debugging is working properly. Since the wally team apparently consist of graduate students who know no software, presumably helped at various times by transient software engineers who have known no hardware and are no longer around, the result is predictable and enquiries about what to do with their code have come up against the usual problems of dealing with people who use a GUI - they have no idea what is going on beyond the pictures they think in and the GUI they use, which I don't and won't.

If you can manage to produce some satisfactorily high level and mathematical communication about what riscof is and does, I might be interested in seeing it. Something like "We have developed an abstract specification of the state S of a RISCV machine in terms of what its registers, memory, etc is, and have developed a description of the (operational? Say what kind) semantics of each RISCV instruction (say in what formal setting, it sounds like maybe just predicate logic formalized for a theorem prover, but be specific) as a predicate describing the set of graphs in SxS that instruction may execute." and "we have partitioned that space SxS for each predicate p according to the structure of the predicate, using the theory developed by ...". Etc.
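For concreteness, the framing being requested can be written down in a few lines. This is a sketch of the requested notation only, not anything riscof actually publishes:

```latex
% Sketch of the requested formalism. S is the set of machine states
% (registers, memory, CSRs, ...); this is illustrative, not riscof's spec.
\[
  p_i \subseteq S \times S, \qquad
  (s, s') \in p_i \iff \text{executing instruction } i
  \text{ in state } s \text{ may yield state } s'.
\]
% A concrete test checks membership of finitely many pairs (s, s') in each
% p_i; a compliance proof would instead establish an inclusion between the
% DUT's transition relation and p_i for every instruction i.
```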

In other words, communication. Not blowhard nothings. It's like seeing a convention of plumbers believing that what they are doing is fluid dynamics. Interesting ... for psychologists!

I'd be happy to help you develop documentation that communicates. It would tell me what the funny words you are using mean, which is always useful for talking to sects and cliques. What I have seen in the way of documentation was hopeless - it commits the standard error of engineers doing writing and japanese washing machine manual doing japanese washing machine manuals of assuming the reader already knows what they are talking about, and starting there. No. The reader knows nothing about anything. One has to start there, at nothing, and take the reader on a journey to something, making sure to provide frequent checkpoints at which the reader can confirm their synchronization and mend it as necessary.

You should take my questions, answer them, and incorporate them. The first sentence is "Riscof is a suite of software that generates test programs to be run on candidate machines in order to evaluate compliance with the published RISCV standards ...".

Simple! Not hard to say at all. Why the problem?

I bet you've never used more than 2 of the settings on your washing machine! I certainly have no idea what the funny symbols and numbers on the dial on mine mean! Why anyone should think I would is the mystery.

If you are also under the illusion that I should trouble myself to learn about riscof first in order to honor your work on it, the answer to that is no. Why should you trouble to learn about my areas of achievement in order to ask me questions about those? If I can't answer you communicatively then, to quote Einstein (again?) "If [I] can't explain it, [I] don't understand it sufficiently", and I should go away and improve myself at that.

Please ask someone in your environment about such matters.

I don't recall now if I compiled riscof. I think it was python? If so, you need to take on board that people cannot use the python installer on a unix distribution for a personal machine, because it will compete with the native package and installation manager. The python installer (was it called "pip"? Something like that) is not provided - deliberately! - in the distributions, and none of the pythonesque methods of compiling it against the installed python are likely to work. I gave up, and that is saying something! Downloading a python script to do the job is safer, but the script one downloads will (rightly!) detect that it is not in control of the python installation and refuse to run.
About all one can do is make a personalized installation, or a local one, neither of which I will do because they will have the effect of causing incompatibilities at a later stage with distribution upgrades. It's simply a no-no, in any case. One cannot afford to lose track of what is on one's machine, and a local or personal installation is precisely something that is not tracked, so no.
No, I will not make a personal sandbox in which to run it. I just said that.
You will have to learn how to write the information required for a debian subdirectory. That can then be used by anyone to make a debian package, which they can convert to any other package format for their distribution, and then they will install it properly under the control of the installed package manager.
That is simply what you have to do to get to ground zero in making open source available. It's about giving people access, and if they can't install it (see above) then there is no access and you're whistling in the dark. I'm not going to install software that the writers aren't even up to the job of making a debian package for! I'd be crazy .. and I don't think I am.
I seem to recall something about some version of a python module you required with an "=" specification being too OLD, so nobody can install it nowadays. Please revise your code to make sure it does not rely on that specificity, and reissue with looser specifications.
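Concretely, the loosening being asked for looks like this in pip's requirements syntax (the package name and version numbers are invented for illustration):

```
# Before: an exact pin that stops installing once 1.2.3 disappears or conflicts
somepkg==1.2.3

# After: a bounded range that keeps working across patch and minor releases
somepkg>=1.2,<2.0
```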

Development

Successfully merging this pull request may close these issues.

Create Github Codespace config for beginners
5 participants