C++ doesn't have garbage collection. Resource finalization is deterministic and performed by a destructor. Also, C++ has POD structs that map data in the same way as assembler, C, COBOL, and PL/1, and can access record-based data sources in exactly the same way.

Java doesn't have records and does not support value types (as C# does), so you can't define a class with fixed-size buffers on the stack. To map a record from a legacy data source you have to use a byte array and manipulate it with a class such as ByteBuffer. This is mostly done with code-generation tools that serialize the data into the corresponding Java types. Of course, there is a lot of overhead in doing so, but that's why IBM have provided zIIP processors. If it wasn't for zIIPs, Java would not be viable on z/OS. IBM have done a lot of work on the hardware stack to make Java more efficient. The z14 has a new facility (guarded storage) to help implement pauseless garbage collection.
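To make that concrete, here is a minimal sketch of the byte-array/ByteBuffer approach. The 12-byte record layout (an 8-byte EBCDIC name followed by a 4-byte big-endian integer) is hypothetical, invented for illustration; Cp1047 is a common z/OS EBCDIC codepage, though its availability depends on the JDK's installed charsets.

```java
import java.nio.ByteBuffer;
import java.nio.charset.Charset;

// Sketch: deserializing a fixed-length legacy record into Java types.
// Layout (hypothetical): bytes 0-7 = CHAR(8) name in EBCDIC,
//                        bytes 8-11 = FIXED BIN(31) count, big-endian.
public class RecordDemo {
    static final Charset EBCDIC = Charset.forName("Cp1047");

    // Decode the name field (bytes 0-7) into a Java String.
    static String parseName(byte[] record) {
        byte[] field = new byte[8];
        ByteBuffer.wrap(record).get(field);
        return new String(field, EBCDIC).trim();
    }

    // Decode the count field (bytes 8-11); ByteBuffer is big-endian by default.
    static int parseCount(byte[] record) {
        return ByteBuffer.wrap(record).getInt(8);
    }

    public static void main(String[] args) {
        // Build a sample record as it might arrive from a legacy data source.
        byte[] raw = ByteBuffer.allocate(12)
                .put("PAYROLL ".getBytes(EBCDIC))
                .putInt(42)
                .array();
        System.out.println(parseName(raw) + " " + parseCount(raw));
    }
}
```

Every field access goes through this kind of decode/encode step, which is exactly the per-record overhead the generated serialization code (and the zIIPs) end up absorbing.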

The z/OS JVM is highly optimized for heap memory management and efficient use of cache lines. It provides excellent features for concurrent programming, including collection libraries of concurrent data structures built on lock-free/wait-free algorithms. For example, ConcurrentHashMap can make use of transactional memory to serialize access. The String class uses the new SIMD instructions for its search/replace methods. Once the JIT compiler has warmed up, it will inline the hot spots to improve code locality and mitigate instruction cache misses.
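A small example of the java.util.concurrent collections referred to above: ConcurrentHashMap lets many threads update shared state with no external lock, and an atomic merge() instead of synchronized blocks. Whether the JVM backs this with transactional memory or SIMD instructions is a JIT/hardware detail invisible to the programmer; the counter shown here is just an illustrative use case.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.IntStream;

// Sketch: lock-free-style shared counter using ConcurrentHashMap.merge,
// which performs the read-modify-write atomically per key.
public class CounterDemo {
    static long countParallel(int n) {
        ConcurrentHashMap<String, Long> hits = new ConcurrentHashMap<>();
        // n parallel increments of the same key, no explicit locking.
        IntStream.range(0, n).parallel()
                 .forEach(i -> hits.merge("requests", 1L, Long::sum));
        return hits.get("requests");
    }

    public static void main(String[] args) {
        System.out.println(countParallel(1000));
    }
}
```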

The most important thing to bear in mind is that replacing a legacy application written in assembler with Java is madness and doomed to failure. IMO, Java should be used only for new applications or for modernizing an existing legacy application.


On 23/04/2018 6:29 AM, Hobart Spitz wrote:
Somewhere, maybe in a different branch of this topic, there was a
discussion about the pros and cons of replacing Assembler with Java.  I
apologize for posting here if it's the wrong place, but I can't seem to
find the original discussion, and I have a question that seems relevant and
important, IMHO.

That's said, I can answer the question, for C/C++, as follows.  (I pose the
question for Java, below.)

*With the *nix/C record and string models, there are these issues:*

    1. Errant/unexpected/unintended pieces of binary data in a text
    file/string can break something.
    2. Separate functions/methods/techniques must be used to manipulate text
    files/strings versus binary files/string. You *must* know what you are
    dealing with up front, and/or somehow code logic for both. (I'm not sure
    the latter is possible in the general case.)
    3. Even with *nix/C oriented machine instructions, the need to inspect
    all characters up to a target point results in performance killing cache
    flooding.
    4. C++ does garbage collection resulting in "pauses" in forward
    progress, and working set, caching, and CPU spikes, among other things.

Let's call these attributes fragility, productivity, and efficiency,
respectively, for convenience.  C has issues with these
characteristics.

As most of the readers here know, mainframe style records and strings do
not suffer from these limitations.  When the length of a string/record is
known external to the data contents, you can manipulate any platform-native
data in z/OS, z/VM without it breaking due to something in the data, you
write the same code regardless of what you are dealing with, and, less
obviously, any activity that skips a cache can avoid a cache line promotion
saving processor resources.

So, my "burning" question for Java is, which, if any, of these above issues
(data fragility, coding productivity, efficiency, and garbage collections)
does Java share with C/C++?

If Java suffers from all or most of the issues, then I would say replacing
Assembler with Java is pretty much out of the question.  On the other hand,
if Java suffers few or none of the above issues, it might be viable to
replace Assembler with Java (ignoring other issues, like cost, testing,
compatibility, data porting, etc.).

To sum up:  Does Java use a similar record/string model to that of C/C++,
and does it do garbage collection similarly?

Thanks in advance for satisfying my curiosity.

OREXXMan
JCL is the buggy whip of 21st century computing.
We want Pipelines in the z/OS base.

On Sat, Apr 21, 2018 at 12:29 PM, Barry Merrill <[email protected]> wrote:

In 1975 there was a BOF (Birds of a Feather) session on Year 2000 Concerns
at the SPRING SHARE meeting, as I recall.  BOFs were spontaneous evening
meetings, posted/scheduled usually that day.

Barry


Herbert W. “Barry” Merrill, PhD
President-Programmer
Merrill Consultants
MXG Software
10717 Cromwell Drive
Dallas, TX 75229
www.mxg.com
[email protected]



-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[email protected]] On
Behalf Of Paul Gilmartin
Sent: Friday, April 20, 2018 5:27 PM
To: [email protected]
Subject: Re: IRS - 60-Year-Old IT System Failed on Tax Day Due to New
Hardware (nextgov.com)

On Fri, 20 Apr 2018 19:25:54 +0000, Lester, Bob wrote:
I agree with both you and Gil.  But, how many programmers in the 60s,
70s, even 80s were thinking about Y2K?  Sure, the really good ones were,
but what about the other 80%?
....and, Y2K came off without a hitch...(FSVO - "hitch")    😊

-----Original Message-----
From: IBM Mainframe Discussion List Porowski, Kenneth
Sent: Friday, April 20, 2018 1:20 PM

That was due to lack of foresight by the programmer not due to the age of
the system.
True in the sense that it affected one-year-old computers as much as older
computers running the same software.

I'm disappointed that this thread has so much focused on Y2K which I meant
only as an extreme example.  Things change.  Y2K was only more precisely
foreseeable.

Increasing complexity of the tax code requires new logic.  Inflation and
rate escalation may have made some data fields inadequate in size.
E-filing requires network interfaces and code to support them and causes
the one-day spike in workload.  I gather from these fora that COBOL is not
comfortably suited to TCP/IP.  IBM bet that SNA/VTAM could crush TCP/IP and
customers were the losers.  IBM bet that EBCDIC could crush ASCII and
customers were the losers.  And customers bet that COBOL skills would
remain in the forefront of availability.

-- gil

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions, send email
to [email protected] with the message: INFO IBM-MAIN
