Never mind using such frivolous things as a file system.
Using a file system is much less bad than dynamically allocating memory, at least as long as you keep a predefined set of files.
I hate to alarm you but... what is a file system except dynamically allocated memory? ;)
It's a persistent dynamic memory allocation that's accessed by multiple processes! :)
multiple processes?!
FreeRTOS tasks are basically processes; IIRC other RTOSes have similar mechanics too.
If you want to get really freaky, try accessing the same flash or RAM from multiple instances of FreeRTOS running on a hypervisor.
Is that just like the shared memory model of parallel computing or are there any added complications?
Have you done this before? Please do share your experiences if so cause now I'm interested :p
It's similar, but the general idea of a hypervisor is to separate resources and avoid this exact situation (it's nuanced and there are some exceptions, but that's the general use case).
The added complication would be that when you compile a binary for one virtual machine, the compiler may optimize things, blissfully unaware that there are other players possibly affecting memory. In a typical multithreaded environment, the compiler has a better picture of how shared resources are being used across threads, but that has to be declared manually for a hypervisor. So if you configure your hypervisor to share resources, you have to be even more vigilant in configuring the individual compilers to play nice.
I don't have a ton of experience with embedded hypervisors, though. And it's worth noting that there are lots of "hypervisors" out there, and some work very differently from others.
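A minimal sketch of what "configuring the compiler to play nice" can look like, assuming a flag living in a hypervisor-shared region. The setup here is hypothetical: in a real build the variable's address would be pinned via the linker script rather than being an ordinary variable.

```cpp
#include <atomic>
#include <cstdint>

// Hypothetical shared region: on real hardware this would be a fixed
// address the hypervisor maps into both guests (typically placed by the
// linker script). It's an ordinary variable here so the sketch runs.
volatile std::uint32_t shared_flag = 0;

void signal_other_guest() {
    // Order earlier writes before the flag becomes visible to the peer.
    std::atomic_thread_fence(std::memory_order_release);
    // volatile forces the compiler to actually emit this store, even
    // though nothing in *this* binary ever appears to read it back.
    shared_flag = 1;
}
```

Without the volatile qualifier, a compiler that sees no reader in this binary is free to drop or reorder the store, which is exactly the hazard described above.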
Lots of microcontrollers have multiple cores now.
And indeed, with memory mapped files the distinction almost disappears completely.
a predefined set of files
...with predefined sizes located in predefined regions of storage.
Yeah, that's what I was implying, just didn't want to write a whole novel about it.
For real though, I have wanted to know for years: what the hell did the person who took this picture say to get everybody to look like that?
It's not quite as interesting as you might hope:
On November 12th, 2012, YouTuber LifeAccordingToJimmy posted a video titled "Don't Stop the Music," a skit based on the awkward moments caused when the music stops at a party and a story one is telling is overheard by others. In the sketch, the music stops as the main character says something particularly strange, causing the partygoers to stare at him. The video gained over 4.2 million views (shown below).
https://youtu.be/DHQEJoqKiBg?t=1m8s
Source
Oh shit you're right. That was really lame.
Edit. Okay, I clicked the link before you edited it and I just watched the whole video and that was actually kind of funny, but still not as cool as I hoped.
"Where are the blue cups?"
Years ago, older C programmers told me you don't know C unless you use dynamic memory management. I ended up rarely writing any C, but when I do, it's usually on microcontrollers where dynamic memory management isn't even supported out of the box.
Jokes on you, greybeards!
Though as a non-embedded dev who has interviewed embedded candidates, I like to ask them to talk about the issues around C vs. C++ for embedded, and the first point 8 out of 10 of them make is that C++ is bad because dynamic allocation is bad. And while they could expand to almost sort of make their point make sense, they generally can't, and they stumble when I point out it's just as optional in each.
Yeah, I get where they're coming from--in typical use cases, C is often used with static allocation (correlated with minimal/embedded devices) while C++ is often used with dynamic allocation (correlated with enterprise/GUI applications).
Of course you can use either for either purpose, but that pattern seems more common. That being said, I'd be concerned with applicants who don't understand that.
Can you give some examples of what you consider to be the issues?
My professor said that C++ embedded compilers used to be very buggy but have matured quite a lot as of ~10 years ago while C was stable a lot longer.
Another thing I can think of is the language complexity causing higher resource usage, e.g. by pulling in large libraries, though I'm not sure about that, since most of the unused stuff should theoretically get optimized out.
I guess if you don't know roughly how the internals of some C++ data types work, it could cause you to accidentally use dynamic memory allocation when using strings or vectors.
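For instance (the names here are just illustrative), a std::string that grows past its small-string-optimization buffer silently calls the heap allocator, while a fixed-capacity std::array never can:

```cpp
#include <array>
#include <string>

// Building a std::string past the small-string-optimization threshold
// (often 15-22 chars, implementation-defined) silently hits the heap:
std::string make_label(int id) {
    return "sensor_reading_channel_" + std::to_string(id);  // may allocate
}

// A fixed-capacity std::array never does -- its storage is part of the
// object itself, so it can live on the stack or in .bss:
std::array<char, 32> fixed_buffer{};
```

Both lines look equally innocent at the call site, which is exactly why knowing the container internals matters on a heap-less target.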
On the other hand, C++-style casts provide more safety than C-style casts, and references can be used instead of raw pointers to make the code generally safer.
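A small sketch of both points (the function names are just illustrative):

```cpp
#include <string>

double ratio(int num, int denom) {
    // static_cast only permits sensible conversions; a C-style cast
    // like (double)num would also silently allow reinterpret-style
    // conversions the compiler can't check for you.
    return static_cast<double>(num) / static_cast<double>(denom);
}

// A reference can't be null and can't be reseated, so the callee
// needs no null check, unlike with a raw pointer parameter.
void append_unit(std::string& text) {
    text += " ms";
}
```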
Joke's* on you
DON'T YOU DO IT!
DON'T YOU FUCKING DO IT!
KEIL ALREADY REQUIRES A BLOOD SACRIFICE YOU'RE KILLING US ALL YOU FOOL!
It's funny because it's true...
I was an embedded developer for years for critical applications that could not go down. While I preferred avoiding dynamically allocating memory, as it was much less risky, there were certainly times it just made sense or was the only way.
One was when we were reprogramming the device, which was connected to an FPGA that would also require reprogramming. You couldn't store both the FPGA binary and the new binary for the device in memory at once, but there was plenty of space to hold each one individually. So: allocate the space for the FPGA image, program it, free it, allocate space for the new processor code, verify, and flash.
What am I missing? Have things changed?
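That sequence, sketched with hypothetical program_fpga/flash_firmware stand-ins (the real calls and signatures are device-specific, stubbed out here):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdlib>

// Stub stand-ins for the device-specific routines; names hypothetical.
static bool program_fpga(const std::uint8_t*, std::size_t) { return true; }
static bool flash_firmware(const std::uint8_t*, std::size_t) { return true; }

// RAM can hold either image, but not both at once, so each buffer is
// freed before the next is allocated.
bool update_device(std::size_t fpga_len, std::size_t fw_len) {
    std::uint8_t* buf = static_cast<std::uint8_t*>(std::malloc(fpga_len));
    if (!buf) return false;
    // ...receive the FPGA bitstream into buf...
    bool ok = program_fpga(buf, fpga_len);
    std::free(buf);  // hand the space back before the second image
    if (!ok) return false;

    buf = static_cast<std::uint8_t*>(std::malloc(fw_len));
    if (!buf) return false;
    // ...receive the new firmware into buf, verify it...
    ok = flash_firmware(buf, fw_len);
    std::free(buf);
    return ok;
}
```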
I'd effectively gain the advantage of dynamic allocation by using a union (or just a generic unsigned char buffer[16384] and use it twice). Mostly the same thing as a malloc.
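A sketch of that static-buffer pattern (type and member names hypothetical):

```cpp
#include <cstdint>

// One statically reserved scratch area, reused for both images. No heap,
// no fragmentation, and the worst-case footprint is visible at link time.
union ScratchBuffer {
    std::uint8_t fpga_image[16384];
    std::uint8_t firmware_image[16384];
};

static ScratchBuffer scratch;  // lands in .bss, reserved for the whole run
```

The union documents that the two uses are mutually exclusive in time, which is the same contract malloc/free expressed implicitly.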
You said it yourself:
While I preferred avoiding dynamically allocating memory, as it was much less risky, there were certainly times it just made sense or was the only way.
This is not a common attitude to have outside of embedded and similar areas. Most programmers dynamically allocate memory without a second thought and not as a last resort. Python is one of the most popular programming languages, but how often do you see Python code that is capable of running without allocating memory at runtime?
I guess I'm taking the meme too literally here, assuming people would be disgusted by it. I think it's a common practice, but obviously one to be used very judiciously.