Re: “A damn stupid thing to do”—the origins of C [message #403104 is a reply to message #403101]
Mon, 21 December 2020 13:37
Dan Espen
Messages: 3867 Registered: January 2012
Karma: 0
Senior Member
scott@slp53.sl.home (Scott Lurndal) writes:
> Dan Espen <dan1espen@gmail.com> writes:
>> Jorgen Grahn <grahn+nntp@snipabacken.se> writes:
>>
>>> On Sun, 2020-12-20, Dan Espen wrote:
>
>>> ...
>>>> When objects go out of range, they get freed automatically. Objects
>>>> have destructors so you can embed all your cleanup in the destructor
>>>> and it gets invoked when you destroy the object or it goes out of
>>>> range.
>>>
>>> That's a good summary.
>>>
>>> And the reason I said "stays in the C tradition" is that it's still
>>> structs on the stack, or embedded in other objects, with well-defined
>>> lifetimes.
>>
>> Thanks.
>>
>> Where I work we had an interesting experience with C++ new vs C malloc.
>> I forget the exact numbers but with malloc, if you malloc a single byte,
>> there's overhead but I believe it's something like 4 bytes for thousands
>> of 1 byte mallocs. Every new takes at least 4 (could have been 8)
>> additional bytes for every single new. So, this application programmer
>> came to me and couldn't figure out why his program converted to C++ was
>> running out of memory. That's when I dug through some control blocks to
>> figure out what was going on.
>>
>> So, warning, if you are using new for lots of small memory chunks, watch
>> out.
>
> The g++ default implementation of the C++ 'new' operator simply calls malloc
> on linux.
>
> malloc will allocate memory that's aligned to the default platform alignment
> (generally the size of the largest native data type, so 64 or 128 bits typically),
> and will allocate 64 bits just before the returned address for heap state.
I'm not sure what's going on but this code:
#include <stdlib.h>
int main() {
    char *x = new char[10];
    char *y = (char *)malloc(10);
}
with these compile options:
g++ -g -c -Wa,-alh x.C
Produces a call to _Znam for new, but a call to malloc for new.
--
Dan Espen
Re: “A damn stupid thing to do”—the origins of C [message #403112 is a reply to message #403104]
Mon, 21 December 2020 16:01
Originally posted by: J. Clarke
On Mon, 21 Dec 2020 13:37:04 -0500, Dan Espen <dan1espen@gmail.com>
wrote:
> scott@slp53.sl.home (Scott Lurndal) writes:
>
>> Dan Espen <dan1espen@gmail.com> writes:
>>> Jorgen Grahn <grahn+nntp@snipabacken.se> writes:
>>>
>>>> On Sun, 2020-12-20, Dan Espen wrote:
>>
>>>> ...
>>>> > When objects go out of range, they get freed automatically. Objects
>>>> > have destructors so you can embed all your cleanup in the destructor
>>>> > and it gets invoked when you destroy the object or it goes out of
>>>> > range.
>>>>
>>>> That's a good summary.
>>>>
>>>> And the reason I said "stays in the C tradition" is that it's still
>>>> structs on the stack, or embedded in other objects, with well-defined
>>>> lifetimes.
>>>
>>> Thanks.
>>>
>>> Where I work we had an interesting experience with C++ new vs C malloc.
>>> I forget the exact numbers but with malloc, if you malloc a single byte,
>>> there's overhead but I believe it's something like 4 bytes for thousands
>>> of 1 byte mallocs. Every new takes at least 4 (could have been 8)
>>> additional bytes for every single new. So, this application programmer
>>> came to me and couldn't figure out why his program converted to C++ was
>>> running out of memory. That's when I dug through some control blocks to
>>> figure out what was going on.
>>>
>>> So, warning, if you are using new for lots of small memory chunks, watch
>>> out.
>>
>> The g++ default implementation of the C++ 'new' operator simply calls malloc
>> on linux.
>>
>> malloc will allocate memory that's aligned to the default platform alignment
>> (generally the size of the largest native data type, so 64 or 128 bits typically),
>> and will allocate 64 bits just before the returned address for heap state.
>
> I'm not sure what's going on but this code:
>
> #include <stdlib.h>
> int main() {
> char *x = new char[10];
> char *y = (char *)malloc(10);
> }
>
> with these compile options:
>
> g++ -g -c -Wa,-alh x.C
>
> Produces a call to _Znam for new, but a call to malloc for new.
That last line makes no sense to me. Do I need more coffee or should
one of those "new"s be something else?
Re: “A damn stupid thing to do”—the origins of C [message #403113 is a reply to message #403112]
Mon, 21 December 2020 16:03
Dan Espen
J. Clarke <jclarke.873638@gmail.com> writes:
> On Mon, 21 Dec 2020 13:37:04 -0500, Dan Espen <dan1espen@gmail.com>
> wrote:
>
>> scott@slp53.sl.home (Scott Lurndal) writes:
>>
>>> Dan Espen <dan1espen@gmail.com> writes:
>>>> Jorgen Grahn <grahn+nntp@snipabacken.se> writes:
>>>>
>>>> > On Sun, 2020-12-20, Dan Espen wrote:
>>>
>>>> > ...
>>>> >> When objects go out of range, they get freed automatically. Objects
>>>> >> have destructors so you can embed all your cleanup in the destructor
>>>> >> and it gets invoked when you destroy the object or it goes out of
>>>> >> range.
>>>> >
>>>> > That's a good summary.
>>>> >
>>>> > And the reason I said "stays in the C tradition" is that it's still
>>>> > structs on the stack, or embedded in other objects, with well-defined
>>>> > lifetimes.
>>>>
>>>> Thanks.
>>>>
>>>> Where I work we had an interesting experience with C++ new vs C malloc.
>>>> I forget the exact numbers but with malloc, if you malloc a single byte,
>>>> there's overhead but I believe it's something like 4 bytes for thousands
>>>> of 1 byte mallocs. Every new takes at least 4 (could have been 8)
>>>> additional bytes for every single new. So, this application programmer
>>>> came to me and couldn't figure out why his program converted to C++ was
>>>> running out of memory. That's when I dug through some control blocks to
>>>> figure out what was going on.
>>>>
>>>> So, warning, if you are using new for lots of small memory chunks, watch
>>>> out.
>>>
>>> The g++ default implementation of the C++ 'new' operator simply calls malloc
>>> on linux.
>>>
>>> malloc will allocate memory that's aligned to the default platform alignment
>>> (generally the size of the largest native data type, so 64 or 128 bits typically),
>>> and will allocate 64 bits just before the returned address for heap state.
>>
>> I'm not sure what's going on but this code:
>>
>> #include <stdlib.h>
>> int main() {
>> char *x = new char[10];
>> char *y = (char *)malloc(10);
>> }
>>
>> with these compile options:
>>
>> g++ -g -c -Wa,-alh x.C
>>
>> Produces a call to _Znam for new, but a call to malloc for new.
>
> That last line makes no sense to me. Do I need more coffee or should
> one of those "new"s be something else?
Yeah, I should have just posted the generated code.
new calls _Znam
malloc calls malloc
--
Dan Espen
Re: “A damn stupid thing to do”—the origins of C [message #403131 is a reply to message #403079]
Tue, 22 December 2020 12:39
Peter Flass
Thomas Koenig <tkoenig@netcologne.de> wrote:
> Dan Espen <dan1espen@gmail.com> schrieb:
>
>> Yep, z/OS was pretty tightly tied to 8 character all upper case external
>> symbols. IBM seemed to struggle with the problem. It looked like, at
>> first, they didn't want to mess with the linker.
>
> That linker was a pig (or still is, I guess).
>
> I remember, back in the day, writing programs that used a Calcomp
> graphics library on a 3090. They took around 20 minutes, wall
> time, to link. As a student assistant, I was paid by the hour,
> but it was still aggravating.
I never experienced problems with the Linkage Editor. Admittedly it wasn’t
a speed demon, partly because it provided a lot of capabilities not often
used, but it was never a bottleneck.
All academic computers I have encountered have been majestically
underpowered. At one employer administrative users were told to stay off
the system completely the last two weeks of the term so students could get
their projects done. Online response time was measured in minutes, except
when the system would crash due to overload.
No other organization would tolerate this. My last employer had peak
workloads twice a year, so the system was sized to provide good response
time during that period, which meant it was way bigger than was needed the
other 300 days a year. That’s why IBM instituted variable capacity,
whatever it’s called. Pay for a smaller system most of the time and turn on
turbo when you need it.
>
> It was a revelation when the first HP workstations arrived, and
> gnuplot came along.
That’s why workstations and PCs became popular.
--
Pete
Re: “A damn stupid thing to do”—the origins of C [message #403136 is a reply to message #403105]
Tue, 22 December 2020 13:59
scott
J. Clarke <jclarke.873638@gmail.com> writes:
> On Mon, 21 Dec 2020 13:23:07 -0500, Dan Espen <dan1espen@gmail.com>
> wrote:
>
>> scott@slp53.sl.home (Scott Lurndal) writes:
>>
>>> Dan Espen <dan1espen@gmail.com> writes:
>>>> The very worst thing? Templates. Templates led to some of the worst
>>>> compile/link procedures I've ever seen.
>>>
>>> That was definitely true in 1991. They're completely invisible with
>>> modern compilers.
>>
>> Well, that wasn't true on z/OS when I retired 5 years ago.
>> I really doubt that's changed.
>
> I find myself wondering what Scott means when he says "modern
> compilers".
The GNU Compiler Collection.
Green Hills C++.
Wind River's Diab compiler.
The Intel C++ compiler.
and a half dozen others. You can't have a modern compiler when
you are stuck with JCL :-)
Re: “A damn stupid thing to do”—the origins of C [message #403137 is a reply to message #403113]
Tue, 22 December 2020 14:05
scott
Dan Espen <dan1espen@gmail.com> writes:
> J. Clarke <jclarke.873638@gmail.com> writes:
>
>> On Mon, 21 Dec 2020 13:37:04 -0500, Dan Espen <dan1espen@gmail.com>
>> wrote:
>>
>>> scott@slp53.sl.home (Scott Lurndal) writes:
>>>
>>>> Dan Espen <dan1espen@gmail.com> writes:
>>>> >Jorgen Grahn <grahn+nntp@snipabacken.se> writes:
>>>> >
>>>> >> On Sun, 2020-12-20, Dan Espen wrote:
>>>>
>>>> >> ...
>>>> >>> When objects go out of range, they get freed automatically. Objects
>>>> >>> have destructors so you can embed all your cleanup in the destructor
>>>> >>> and it gets invoked when you destroy the object or it goes out of
>>>> >>> range.
>>>> >>
>>>> >> That's a good summary.
>>>> >>
>>>> >> And the reason I said "stays in the C tradition" is that it's still
>>>> >> structs on the stack, or embedded in other objects, with well-defined
>>>> >> lifetimes.
>>>> >
>>>> >Thanks.
>>>> >
>>>> >Where I work we had an interesting experience with C++ new vs C malloc.
>>>> >I forget the exact numbers but with malloc, if you malloc a single byte,
>>>> >there's overhead but I believe it's something like 4 bytes for thousands
>>>> >of 1 byte mallocs. Every new takes at least 4 (could have been 8)
>>>> >additional bytes for every single new. So, this application programmer
>>>> >came to me and couldn't figure out why his program converted to C++ was
>>>> >running out of memory. That's when I dug through some control blocks to
>>>> >figure out what was going on.
>>>> >
>>>> >So, warning, if you are using new for lots of small memory chunks, watch
>>>> >out.
>>>>
>>>> The g++ default implementation of the C++ 'new' operator simply calls malloc
>>>> on linux.
>>>>
>>>> malloc will allocate memory that's aligned to the default platform alignment
>>>> (generally the size of the largest native data type, so 64 or 128 bits typically),
>>>> and will allocate 64 bits just before the returned address for heap state.
>>>
>>> I'm not sure what's going on but this code:
>>>
>>> #include <stdlib.h>
>>> int main() {
>>> char *x = new char[10];
>>> char *y = (char *)malloc(10);
>>> }
>>>
>>> with these compile options:
>>>
>>> g++ -g -c -Wa,-alh x.C
>>>
>>> Produces a call to _Znam for new, but a call to malloc for new.
>>
>> That last line makes no sense to me. Do I need more coffee or should
>> one of those "new"s be something else?
>
> Yeah, I should have just posted the generated code.
> new calls _Znam
> malloc calls malloc
$ c++filt _Znam
operator new[](unsigned long)
The function 'operator new[]' calls _Znwm which calls malloc.
Dump of assembler code for function _Znam:
=> 0x00007ffff7ac4370 <+0>: sub $0x8,%rsp
0x00007ffff7ac4374 <+4>: callq 0x7ffff7abdba8 <_Znwm@plt>
0x00007ffff7ac4379 <+9>: add $0x8,%rsp
0x00007ffff7ac437d <+13>: retq
0x00007ffff7ac437e <+14>: add $0x1,%rdx
0x00007ffff7ac4382 <+18>: mov %rax,%rdi
0x00007ffff7ac4385 <+21>: je 0x7ffff7ac438c <_Znam+28>
0x00007ffff7ac4387 <+23>: callq 0x7ffff7ac0458 <_Unwind_Resume@plt>
0x00007ffff7ac438c <+28>: callq 0x7ffff7abe578 <__cxa_call_unexpected@plt>
Dump of assembler code for function _Znwm:
=> 0x00007ffff7ac42c0 <+0>: push %rbx
0x00007ffff7ac42c1 <+1>: test %rdi,%rdi
0x00007ffff7ac42c4 <+4>: mov %rdi,%rbx
0x00007ffff7ac42c7 <+7>: mov $0x1,%eax
0x00007ffff7ac42cc <+12>: cmove %rax,%rbx
0x00007ffff7ac42d0 <+16>: mov %rbx,%rdi
0x00007ffff7ac42d3 <+19>: callq 0x7ffff7abe028 <malloc@plt>
0x00007ffff7ac42d8 <+24>: test %rax,%rax
0x00007ffff7ac42db <+27>: je 0x7ffff7ac42e0 <_Znwm+32>
0x00007ffff7ac42dd <+29>: pop %rbx
0x00007ffff7ac42de <+30>: retq
>
> --
> Dan Espen
Re: “A damn stupid thing to do”—the origins of C [message #403138 is a reply to message #403137]
Tue, 22 December 2020 14:13
Dan Espen
scott@slp53.sl.home (Scott Lurndal) writes:
> Dan Espen <dan1espen@gmail.com> writes:
>> J. Clarke <jclarke.873638@gmail.com> writes:
>>
>>> On Mon, 21 Dec 2020 13:37:04 -0500, Dan Espen <dan1espen@gmail.com>
>>> wrote:
>>>
>>>> scott@slp53.sl.home (Scott Lurndal) writes:
>>>>
>>>> > Dan Espen <dan1espen@gmail.com> writes:
>>>> >>Jorgen Grahn <grahn+nntp@snipabacken.se> writes:
>>>> >>
>>>> >>> On Sun, 2020-12-20, Dan Espen wrote:
>>>> >
>>>> >>> ...
>>>> >>>> When objects go out of range, they get freed automatically. Objects
>>>> >>>> have destructors so you can embed all your cleanup in the destructor
>>>> >>>> and it gets invoked when you destroy the object or it goes out of
>>>> >>>> range.
>>>> >>>
>>>> >>> That's a good summary.
>>>> >>>
>>>> >>> And the reason I said "stays in the C tradition" is that it's still
>>>> >>> structs on the stack, or embedded in other objects, with well-defined
>>>> >>> lifetimes.
>>>> >>
>>>> >>Thanks.
>>>> >>
>>>> >>Where I work we had an interesting experience with C++ new vs C malloc.
>>>> >>I forget the exact numbers but with malloc, if you malloc a single byte,
>>>> >>there's overhead but I believe it's something like 4 bytes for thousands
>>>> >>of 1 byte mallocs. Every new takes at least 4 (could have been 8)
>>>> >>additional bytes for every single new. So, this application programmer
>>>> >>came to me and couldn't figure out why his program converted to C++ was
>>>> >>running out of memory. That's when I dug through some control blocks to
>>>> >>figure out what was going on.
>>>> >>
>>>> >>So, warning, if you are using new for lots of small memory chunks, watch
>>>> >>out.
>>>> >
>>>> > The g++ default implementation of the C++ 'new' operator simply calls malloc
>>>> > on linux.
>>>> >
>>>> > malloc will allocate memory that's aligned to the default platform alignment
>>>> > (generally the size of the largest native data type, so 64 or 128 bits typically),
>>>> > and will allocate 64 bits just before the returned address for heap state.
>>>>
>>>> I'm not sure what's going on but this code:
>>>>
>>>> #include <stdlib.h>
>>>> int main() {
>>>> char *x = new char[10];
>>>> char *y = (char *)malloc(10);
>>>> }
>>>>
>>>> with these compile options:
>>>>
>>>> g++ -g -c -Wa,-alh x.C
>>>>
>>>> Produces a call to _Znam for new, but a call to malloc for new.
>>>
>>> That last line makes no sense to me. Do I need more coffee or should
>>> one of those "new"s be something else?
>>
>> Yeah, I should have just posted the generated code.
>> new calls _Znam
>> malloc calls malloc
>
> $ c++filt _Znam
> operator new[](unsigned long)
>
> The function 'operator new[]' calls _Znwm which calls malloc.
>
> Dump of assembler code for function _Znam:
> => 0x00007ffff7ac4370 <+0>: sub $0x8,%rsp
> 0x00007ffff7ac4374 <+4>: callq 0x7ffff7abdba8 <_Znwm@plt>
> 0x00007ffff7ac4379 <+9>: add $0x8,%rsp
> 0x00007ffff7ac437d <+13>: retq
> 0x00007ffff7ac437e <+14>: add $0x1,%rdx
> 0x00007ffff7ac4382 <+18>: mov %rax,%rdi
> 0x00007ffff7ac4385 <+21>: je 0x7ffff7ac438c <_Znam+28>
> 0x00007ffff7ac4387 <+23>: callq 0x7ffff7ac0458 <_Unwind_Resume@plt>
> 0x00007ffff7ac438c <+28>: callq 0x7ffff7abe578 <__cxa_call_unexpected@plt>
>
> Dump of assembler code for function _Znwm:
> => 0x00007ffff7ac42c0 <+0>: push %rbx
> 0x00007ffff7ac42c1 <+1>: test %rdi,%rdi
> 0x00007ffff7ac42c4 <+4>: mov %rdi,%rbx
> 0x00007ffff7ac42c7 <+7>: mov $0x1,%eax
> 0x00007ffff7ac42cc <+12>: cmove %rax,%rbx
> 0x00007ffff7ac42d0 <+16>: mov %rbx,%rdi
> 0x00007ffff7ac42d3 <+19>: callq 0x7ffff7abe028 <malloc@plt>
> 0x00007ffff7ac42d8 <+24>: test %rax,%rax
> 0x00007ffff7ac42db <+27>: je 0x7ffff7ac42e0 <_Znwm+32>
> 0x00007ffff7ac42dd <+29>: pop %rbx
> 0x00007ffff7ac42de <+30>: retq
So, on z/OS a 'new' acquired extra bytes in the malloced memory.
You might ask for 8 but 12 were acquired.
I see an add in there. Any idea why?
--
Dan Espen
Re: “A damn stupid thing to do”—the origins of C [message #403140 is a reply to message #403136]
Tue, 22 December 2020 14:23
Dan Espen
scott@slp53.sl.home (Scott Lurndal) writes:
> J. Clarke <jclarke.873638@gmail.com> writes:
>> On Mon, 21 Dec 2020 13:23:07 -0500, Dan Espen <dan1espen@gmail.com>
>> wrote:
>>
>>> scott@slp53.sl.home (Scott Lurndal) writes:
>>>
>>>> Dan Espen <dan1espen@gmail.com> writes:
>>>> >The very worst thing? Templates. Templates led to some of the worst
>>>> >compile/link procedures I've ever seen.
>>>>
>>>> That was definitely true in 1991. They're completely invisible with
>>>> modern compilers.
>>>
>>> Well, that wasn't true on z/OS when I retired 5 years ago.
>>> I really doubt that's changed.
>>
>> I find myself wondering what Scott means when he says "modern
>> compilers".
>
> The GNU Compiler Collection.
> Green Hills C++.
> Wind River's Diab compiler.
> The Intel C++ compiler.
>
> and a half dozen others. You can't have a modern compiler when
> you are stuck with JCL :-)
Not sure how JCL comes into the picture.
As I've explained before, I developed z/OS tools to do compiles using
CLIST.
Of course you could also invoke the exact same compiler using z/OS
UNIX System Services.
IBM came up with two C compilers. I think IBM might have done the first
one, C/370, itself. When they wanted ANSI C they farmed out compiler
development; this was around 2000. I don't think I ever found out
who did the actual work, I just got the impression it wasn't IBM.
I think your point might be that it's hard to do seamless stuff when
you want to cater to 100% compatibility with everything that's
come before. But it wasn't JCL; the object code format, the load module
format, the binder, and inter-language calls to all the other languages
might have played a role.
--
Dan Espen
Re: “A damn stupid thing to do”—the origins of C [message #403144 is a reply to message #403138]
Tue, 22 December 2020 16:55
Originally posted by: Vir Campestris
On 22/12/2020 19:13, Dan Espen wrote:
> scott@slp53.sl.home (Scott Lurndal) writes:
>
>> Dan Espen <dan1espen@gmail.com> writes:
>>> J. Clarke <jclarke.873638@gmail.com> writes:
>>>
>>>> On Mon, 21 Dec 2020 13:37:04 -0500, Dan Espen <dan1espen@gmail.com>
>>>> wrote:
>>>>
>>>> > scott@slp53.sl.home (Scott Lurndal) writes:
>>>> >
>>>> >> Dan Espen <dan1espen@gmail.com> writes:
>>>> >>> Jorgen Grahn <grahn+nntp@snipabacken.se> writes:
>>>> >>>
>>>> >>>> On Sun, 2020-12-20, Dan Espen wrote:
>>>> >>
>>>> >>>> ...
>>>> >>>>> When objects go out of range, they get freed automatically. Objects
>>>> >>>>> have destructors so you can embed all your cleanup in the destructor
>>>> >>>>> and it gets invoked when you destroy the object or it goes out of
>>>> >>>>> range.
>>>> >>>>
>>>> >>>> That's a good summary.
>>>> >>>>
>>>> >>>> And the reason I said "stays in the C tradition" is that it's still
>>>> >>>> structs on the stack, or embedded in other objects, with well-defined
>>>> >>>> lifetimes.
>>>> >>>
>>>> >>> Thanks.
>>>> >>>
>>>> >>> Where I work we had an interesting experience with C++ new vs C malloc.
>>>> >>> I forget the exact numbers but with malloc, if you malloc a single byte,
>>>> >>> there's overhead but I believe it's something like 4 bytes for thousands
>>>> >>> of 1 byte mallocs. Every new takes at least 4 (could have been 8)
>>>> >>> additional bytes for every single new. So, this application programmer
>>>> >>> came to me and couldn't figure out why his program converted to C++ was
>>>> >>> running out of memory. That's when I dug through some control blocks to
>>>> >>> figure out what was going on.
>>>> >>>
>>>> >>> So, warning, if you are using new for lots of small memory chunks, watch
>>>> >>> out.
>>>> >>
>>>> >> The g++ default implementation of the C++ 'new' operator simply calls malloc
>>>> >> on linux.
>>>> >>
>>>> >> malloc will allocate memory that's aligned to the default platform alignment
>>>> >> (generally the size of the largest native data type, so 64 or 128 bits typically),
>>>> >> and will allocate 64 bits just before the returned address for heap state.
>>>> >
>>>> > I'm not sure what's going on but this code:
>>>> >
>>>> > #include <stdlib.h>
>>>> > int main() {
>>>> > char *x = new char[10];
>>>> > char *y = (char *)malloc(10);
>>>> > }
>>>> >
>>>> > with these compile options:
>>>> >
>>>> > g++ -g -c -Wa,-alh x.C
>>>> >
>>>> > Produces a call to _Znam for new, but a call to malloc for new.
>>>>
>>>> That last line makes no sense to me. Do I need more coffee or should
>>>> one of those "new"s be something else?
>>>
>>> Yeah, I should have just posted the generated code.
>>> new calls _Znam
>>> malloc calls malloc
>>
>> $ c++filt _Znam
>> operator new[](unsigned long)
>>
>> The function 'operator new[]' calls _Znwm which calls malloc.
>>
>> Dump of assembler code for function _Znam:
>> => 0x00007ffff7ac4370 <+0>: sub $0x8,%rsp
>> 0x00007ffff7ac4374 <+4>: callq 0x7ffff7abdba8 <_Znwm@plt>
>> 0x00007ffff7ac4379 <+9>: add $0x8,%rsp
>> 0x00007ffff7ac437d <+13>: retq
>> 0x00007ffff7ac437e <+14>: add $0x1,%rdx
>> 0x00007ffff7ac4382 <+18>: mov %rax,%rdi
>> 0x00007ffff7ac4385 <+21>: je 0x7ffff7ac438c <_Znam+28>
>> 0x00007ffff7ac4387 <+23>: callq 0x7ffff7ac0458 <_Unwind_Resume@plt>
>> 0x00007ffff7ac438c <+28>: callq 0x7ffff7abe578 <__cxa_call_unexpected@plt>
>>
>> Dump of assembler code for function _Znwm:
>> => 0x00007ffff7ac42c0 <+0>: push %rbx
>> 0x00007ffff7ac42c1 <+1>: test %rdi,%rdi
>> 0x00007ffff7ac42c4 <+4>: mov %rdi,%rbx
>> 0x00007ffff7ac42c7 <+7>: mov $0x1,%eax
>> 0x00007ffff7ac42cc <+12>: cmove %rax,%rbx
>> 0x00007ffff7ac42d0 <+16>: mov %rbx,%rdi
>> 0x00007ffff7ac42d3 <+19>: callq 0x7ffff7abe028 <malloc@plt>
>> 0x00007ffff7ac42d8 <+24>: test %rax,%rax
>> 0x00007ffff7ac42db <+27>: je 0x7ffff7ac42e0 <_Znwm+32>
>> 0x00007ffff7ac42dd <+29>: pop %rbx
>> 0x00007ffff7ac42de <+30>: retq
>
> So, on z/OS a 'new' acquired extra bytes in the malloced memory.
> You might ask for 8 but 12 were acquired.
>
> I see an add in there. Any idea why?
>
Which one did you mean?
0x00007ffff7ac4379 <+9>: add $0x8,%rsp
is cleaning up the stack.
0x00007ffff7ac437e <+14>: add $0x1,%rdx
is after the retq at the end of the function, and is part of whatever the
next one does.
Andy
Re: “A damn stupid thing to do”—the origins of C [message #403145 is a reply to message #403138]
Tue, 22 December 2020 16:58
scott
Dan Espen <dan1espen@gmail.com> writes:
> scott@slp53.sl.home (Scott Lurndal) writes:
>
>>>> >I'm not sure what's going on but this code:
>>>> >
>>>> >#include <stdlib.h>
>>>> >int main() {
>>>> > char *x = new char[10];
>>>> > char *y = (char *)malloc(10);
>>>> >}
>>>> >
>>>> >with these compile options:
>>>> >
>>>> >g++ -g -c -Wa,-alh x.C
>>>> >
>>>> >Produces a call to _Znam for new, but a call to malloc for new.
>>>>
>>>> That last line makes no sense to me. Do I need more coffee or should
>>>> one of those "new"s be something else?
>>>
>>> Yeah, I should have just posted the generated code.
>>> new calls _Znam
>>> malloc calls malloc
>>
>> $ c++filt _Znam
>> operator new[](unsigned long)
>>
>> The function 'operator new[]' calls _Znwm which calls malloc.
>>
>> Dump of assembler code for function _Znam:
>> => 0x00007ffff7ac4370 <+0>: sub $0x8,%rsp
>> 0x00007ffff7ac4374 <+4>: callq 0x7ffff7abdba8 <_Znwm@plt>
>> 0x00007ffff7ac4379 <+9>: add $0x8,%rsp
>> 0x00007ffff7ac437d <+13>: retq
>> 0x00007ffff7ac437e <+14>: add $0x1,%rdx
>> 0x00007ffff7ac4382 <+18>: mov %rax,%rdi
>> 0x00007ffff7ac4385 <+21>: je 0x7ffff7ac438c <_Znam+28>
>> 0x00007ffff7ac4387 <+23>: callq 0x7ffff7ac0458 <_Unwind_Resume@plt>
>> 0x00007ffff7ac438c <+28>: callq 0x7ffff7abe578 <__cxa_call_unexpected@plt>
>>
>> Dump of assembler code for function _Znwm:
>> => 0x00007ffff7ac42c0 <+0>: push %rbx
>> 0x00007ffff7ac42c1 <+1>: test %rdi,%rdi
>> 0x00007ffff7ac42c4 <+4>: mov %rdi,%rbx
>> 0x00007ffff7ac42c7 <+7>: mov $0x1,%eax
>> 0x00007ffff7ac42cc <+12>: cmove %rax,%rbx
>> 0x00007ffff7ac42d0 <+16>: mov %rbx,%rdi
>> 0x00007ffff7ac42d3 <+19>: callq 0x7ffff7abe028 <malloc@plt>
>> 0x00007ffff7ac42d8 <+24>: test %rax,%rax
>> 0x00007ffff7ac42db <+27>: je 0x7ffff7ac42e0 <_Znwm+32>
>> 0x00007ffff7ac42dd <+29>: pop %rbx
>> 0x00007ffff7ac42de <+30>: retq
>
> So, on z/OS a 'new' acquired extra bytes in the malloced memory.
> You might ask for 8 but 12 were acquired.
>
> I see an add in there. Any idea why?
The first 'sub' allocates 8 bytes of stack space, then _Znam
calls _Znwm (which demangles to "operator new(unsigned long)"),
when _Znwm returns, the 'add' deallocates the 8 bytes of stack
space. The code following the 'retq' is for unwinding the stack
in case an exception is thrown; the add in this code is opaque
to me, but I assume that the exception 'throw' code which invokes
this has left something interesting in %rdx which is the register
in which the third parameter to a function is passed to the function
in x86_64 mode (%rdi, %rsi, %rdx, %rcx, %r8, %r9 carry the first six
function parameters into any called x86_64 function in the standard
linux ABI). The subsequent 'je' will test the condition flags set
by the add and either finish the unwind or ABEND. My guess is that
it's the lexical stack depth as a negative integer, and if it reaches zero
after the add, there are no stack frames left to unwind.
There was similar unwind code after the retq for _Znwm, but I didn't
paste that in the OP.