20 Years in the Making, GnuCOBOL Is Ready for Industry (thenewstack.io)
180 points by cglong on March 16, 2024 | hide | past | favorite | 114 comments


>>Get those punch cards back out!

I get that's (probably!) a joke, but it misrepresents COBOL as something completely stuck in the 70s. And, y'know, it isn't exactly the fanciest language in the world, but we still have several programmers on our project and they're spitting out new code every day of their life, no punchcards:).

(And it's not on a mainframe either - it's running primarily on AIX, with some Windows and Linux).


I've spent some time working in finance, so I've actually worked with COBOL developers personally in multiple different roles. In all those cases they were maintaining legacy applications that ran on IBM mainframes. Why would a company choose to use COBOL if they weren't restricted to what ran on IBM's mainframe infrastructure? Serious question, not an attack.

I get that many of these legacy applications are some of the most battle-tested things in existence, and do what they do very well. I've seen them in action personally. However I'm also under the impression that COBOL is just not that amazing compared with modern alternatives: It's not easy to write, or maintain, and (as far as I understand) the things that make it 'fast' have more to do with the platform than COBOL itself.

I'd love to know more about why someone would choose COBOL today, if anyone can fill me in.


> I'd love to know more about why someone would choose COBOL today, if anyone can fill me in.

I doubt anybody is choosing to start a new greenfield system in COBOL in 2024.

But, if you have an existing COBOL code base, and the business is asking for new features, you have two basic choices: (1) write new modules for the existing system in COBOL, or (2) write the new modules in a more mainstream language (Java, C#, whatever) and have the existing COBOL modules integrate with those new modules (e.g. using REST).

Both options have their pros and cons. If you aren't already doing (2), then (in the short-term at least) (1) is the easier path.


TIL COBOL has been able to make REST calls for almost 20 years: https://stackoverflow.com/questions/52136482/how-can-i-call-...


That link is about COBOL running on IBM CICS. It doesn’t apply to COBOL in other environments (both non-CICS IBM and non-IBM)

But, pretty much every COBOL implementation can do this nowadays, even if the details differ between implementations.

COBOL can call C code, so if you have a C library that does something, you can use it from COBOL.


You're right. I think I misread the original post, and assumed that since they were talking about running COBOL on more modern systems that they were using it to solve new business problems, hence my confusion.


> I think I misread the original post, and assumed that since they were talking about running COBOL on more modern systems that they were using it to solve new business problems, hence my confusion.

Sometimes, a new business problem can be addressed (whether in whole or in part) by changes or enhancements to an existing IT system; in that sense, some people use COBOL for new business problems even in 2024.


We actually rely upon two different operating environments for COBOL that do not originate from IBM.

The first is OS2200. Our final major application on this platform was completed by 1970, and links COBOL into assembler that accesses the hierarchical DMS database. The first SMP port of UNIX was to this hardware:

https://en.m.wikipedia.org/wiki/OS_2200

The second is VMS, specifically the VAX variety. VMS bundled a COBOL compiler, which we used to write ACMS applications.

https://en.m.wikipedia.org/wiki/OpenVMS

We continue to produce new COBOL code for both.


Are you able to find COBOL programmers to hire, or is it like Rust where a lot of jobs are "know C++ and we'll train you"?


My buddy's wife has a business degree, and her first company had her learn COBOL for a couple of months (she had never programmed before). I'm not sure where that went after the training, but apparently it's still a thing (this was only a few years ago).


We have several. Hiring is possible.

Hiring people with experience of the respective OS is more difficult.


C++ is not that similar to COBOL. COBOL is closer to Crystal Reports or SAP (ABAP). There are plenty of people with experience in those domains around today. Probably more than C++.


Is your org considering transitioning to OpenVMS on x86?


No, TDMS wasn't ported to the Alpha, so migration was never an option.


> No, TDMS wasn't ported to the Alpha, so migration was never an option.

VSI has TDMS for both Alpha and Integrity: https://vmssoftware.com/products/tdms/

I assume they'll probably come out with an x86-64 version at some point


What's the long-term plan then? Rehosting on a different OS?


> Why would a company choose to use COBOL if they weren't restricted to what ran on IBM's mainframe infrastructure?

COBOL has nothing to do with just IBM mainframes, even though that is what it mostly runs on. The second big platform that runs COBOL programmes (heh, not apps!) is OpenVMS (whatever hardware it runs on today), although the number of OpenVMS installations has seriously dwindled over the past decade.

The reason companies no longer choose COBOL is the mostly dead ecosystem and, as a consequence, the lack of fresh blood on the job market.

If we imagine a parallel reality where COBOL is still thriving, many companies would almost certainly choose COBOL for new projects, for its safe memory model at the very least: there is no memory corruption, no buffer overflows, no null pointer exceptions. If there are bugs in a COBOL programme, the bugs relate either to the logic or to the data handling.

With most programming languages of today you have all of those bug classes, plus compounding memory-related issues, even in languages considered «safe» and garbage-collected. The business generally does not care about the art of fine programming unless it provides a substantial ROI (e.g. vastly reduced running and operational costs, or higher revenue), so the business would absolutely choose (or at least consider) COBOL in such a parallel reality.


COBOL is very easy to learn and write if you use it for what it was intended for. If you try to make a CRUD app or read from a web service, you are going to have a bad time. Trust me, I know. It can be done, but it is not pretty and was kind of shoehorned in.

We have been trying to get rid of all of our COBOL and get off of the mainframe (many apps have been) for the past 20+ years. We are getting closer but priorities keep changing which keeps delaying that goal.

I love doing backend and db work so I would gladly switch to Go for my new projects. But, I would still need to use DB2, which (surprise surprise) is on the mainframe.


Db2 actually implements SQL/PSM, which is more than can be said for Sybase/Microsoft SQL Server.

I'm not sure if this extends to all three Db2 variants, namely Mainframe, AS/400 derivatives, and UDB for Linux and Windows (I hope so).

https://en.m.wikipedia.org/wiki/SQL/PSM

IBM supposedly got the PSM code from EnterpriseDB/Postgres.

https://www.enterprisedb.com/news/enterprisedb-and-ibmr-coll...

https://www.internetnews.com/blog/ibm-gets-compatible-with-o...


What they got from EnterpriseDB is the Oracle (PL/SQL) compatibility. SQL/PSM was built by IBM themselves.


This is actually the first time I have heard of SQL/PSM. From Wiki, looks like it is for stored procedures. We do not do stored procedures at all on our DB2 instances. Heck, we don't even use triggers either. All our mainframe/DB2 business logic is in COBOL.


Wow! I've worked on an SMS-sending gateway, and the cofounders couldn't resist the temptation to use stored procedures for logic (we use Postgres, with the rest of the logic in Perl). Because of this, in recent years our installations have been limited by HDD performance, but fortunately, after the 2008 crisis, we don't need much scaling.


A lot of COBOL in finance runs on x86 servers running MicroFocus Cobol on RHEL.


And on Xenix earlier, and on SVR4 and SVR3.


I suspect System V too, and there was definitely a lot on AIX, if only because CICS for AIX was available (maybe it still is? Paradoxically, it was easier for me to get the mainframe version running... than any AIX system...)

MicroFocus on RHEL is stuff I've seen live recently, including new development, and I think there are still companies buying new releases of COBOL programs.

Similarly, MUMPS is very much alive in the financial market, with completely new development being done, even if mostly as a small component of bigger, usually Java, apps.


Interesting about MUMPS.


I cannot speak for all the COBOL implementations, but what I've seen is that a company will purchase and implement a new COTS ("Commercial Off The Shelf") back-office application such as ERP - Financials, HR, CRM, EPM, etc. Something that's boring to techies but critical bread and butter for a company.

That application may well have some COBOL in it. It's sold as new, but such big applications will have bits that were written 3 months ago and bits that were written 20 years ago.

Now the company has a shiny new application that has some COBOL.

ERPs are living applications. Legislation changes, markets change, company's needs change, so ERPs are both customized and updated and expanded. So you need some COBOL expertise depending how active you want to be about it.

(note: a typical techie, including myself a decade ago, will typically see insanity in this and think something like "Payroll???? How hard can THAT be??". Like being a parent or living through a civil war, nobody can truly explain this to another human being that hasn't been through it :-)


> COBOL is just not that amazing compared with modern alternatives

To be honest, I'm new to the field of mainframes; I've only been playing with Hercules for a few weeks.

But after more than 20 years in the industry at the micro level (I've spent nearly all my life with all sorts of x86 and networking hardware, but also a few years with Macs and with microcontrollers), I must say that the only thing comparable to mainframes in reliability is Erlang.

And if I take your words and just apply s/cobol/erlang/i, most of them remain true. Erlang has much smaller LOC counts, but it looks like the cost of massive reliability is the same: not easy to write or maintain (it is just too different from the classic modern way of programming), and Erlang is also not very fast (experienced people say it is close to Perl).

All of this holds even though Erlang's syntax is directly derived from Prolog, nearly the best syntax in the modern world!

BTW, if you want, I'm open to talking about how to make Erlang better for today, or even about possible ways to combine COBOL with Erlang on modern hardware.


BTW, you could consider me a guy from the Philippe Kahn world (I began with Turbo C and Turbo Pascal, then Delphi, and eventually Modula, that other pillar of the reliable world).


Nobody chooses COBOL as a language for creating new systems today. It is maintenance of existing systems, primarily but not exclusively, running on IBM mainframe and midrange servers.


I understand that old programs are extremely stable, but I don't understand why they are relevant. The world changes so much, why are these programs still useful?


That's a very techie view (and I speak as, largely, one:)

Core accounting principles don't change so much. Core payroll principles don't change so much. Core finance principles don't change so much.

We have a technology-oriented view that "world changes so much", but there are domains where that's true, and domains where that's not true.

Several times in my life I had the luck to have a mentor ask me, when I propose some obvious technological change and improvement, "what's the business advantage? If I let you spend X amount of hours over Y days to accomplish technical change Z, in which way will the business be better?"

In the world of startups and ___-tech, technology stacks matter and change, and it's a whirlwind world of frameworks and languages and webapp stuff.

But in the core, back-office, cost-centre business of the company... the rules are complicated and numerous but in a way stable enough that solid stable complex software from 30 years ago still has value.


The world changes little-by-little, not all at once.

Let's say your application (100k LoC of Cobol) handles payroll. Your government introduces some new rules about how paternity leave should be handled. You have two options, rewrite the 100K lines into Java and add the new rule, or just add the new rule in the Cobol app. The latter is cheaper, at least in the long term, so that's what you do. Now you have 101k lines of Cobol.

20 years, four tax reforms, three mergers, five international expansions and fifty lawsuits later, you have migrated 100k lines of Cobol to Java, but your app grew and is now 3 million lines.


I've worked at organisations which have mainframe applications that have been running for almost 50 years. One of the funnier reasons I've heard about why these '100k lines of COBOL' programs never get rewritten is that no one really knows the full extent of what they do, and even if they do know the full extent of what they do, no one really knows the full extent of why. The business requirements that these programs cover are long since lost, but the program still remains. Even if this isn't a 100% true anecdote, it's an amusing one.


It is a 100% true anecdote at any number of businesses. It's philosophically enlightening in a way, and related to an increasingly profound adage, "the purpose of a system is what it does" (which can be applied to IT and political and social and school and other systems through various lenses).

There are any number of systems out there which are "self documenting" in the weirder sense that what they do is what the rules are, rather than the other way around. There's no external document or business process ruleset that describes what they do at the level of detail and accuracy that the code does. There are a myriad of edge cases built in which are important and relevant and likely still true (until somebody actively decides to change some rules) but not well documented or understood. I'm working on a government payroll system and it is mind-bogglingly more complex than I could ever have possibly imagined, and it still blows my mind every day. We have tens of thousands of time & labour rules, due to hundreds of unions making thousands of negotiations over decades, on top of complex baseline payroll and taxation rules across governments and provinces and levels.

Attempts to rewrite such software almost always, almost inevitably, take much longer and cost much more, not just because of technical and architectural challenges, but because of, well, hubris: the assumption that just because we can migrate webserver code across platforms, we can migrate a payroll system across platforms as well. Turns out one is orders of magnitude more complex than the other; we are just not used to thinking that way :).


> We have tens of thousands of time & labour rules, due to hundreds of unions making thousands of negotiations over decades, on top of complex baseline payroll and taxation rules across governments and provinces and levels.

Working in healthcare/insurance right now, and I can tell you it's a very similar tangle of rules about which insurance plan covers what. The mess that is US healthcare ties up at least a trillion dollars a year in servicing its complexities. While it pays my bills, it pains me that a country can self-inflict such penalties just to avoid some sort of government-provided universal healthcare system.


Heck, I've got code I wrote ten years ago and no idea why it was done that way except for a vague memory of a request that I implemented but was never really used.


One plausible way to rewrite such a program would be to make it more modular. It's already likely all those rules are separated in modules and it is (allegedly) possible to call routines in different languages from COBOL code. You can, then, start writing those rules in Java, Python, C, or Rust (there is a Rust for z/OS!).

OTOH, you now have a more complicated and, perhaps, more brittle business-critical application you need to tend to.


> It's already likely all those rules are separated in modules

Have you seen much COBOL code? I had some dealings with a public school budgeting and payroll system back in the 80's, and I can tell you, it was a complete mess, with each program in its own source file (except some COPYs for record layouts). Here's just a simple example:

COBOL has statements divided into paragraphs, each with a paragraph name. You can have for example, PARA-A down through PARA-D, one after the other. You can say PERFORM PARA-A and it's somewhat like calling a function, where it executes the paragraph and then comes back. Or you can say PERFORM PARA-A THROUGH PARA-C, and that's like calling a function made up of those 3 paragraphs. Or you can fall into PARA-A, and it executes the paragraphs one after the other until a GOTO occurs.

Another completely annoying thing is looping: PERFORM PARA-A VARYING IX FROM 1 BY 1 UNTIL IX > 10 is a for loop. The thing is, the code you execute is somewhere else, maybe 10 pages away. Nowadays there are probably some fancy code editors to help with these kinds of issues, but "back in the day", they were annoying as hell, and how good of a COBOL programmer you were depended on the depth of your mental execution stack.
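As a rough sketch (in Java, since the thread keeps comparing the two): PERFORM over paragraphs behaves like calling parameterless procedures that all share global state. The paragraph names, numbers, and the work each one does below are invented purely for illustration:

```java
import java.util.List;

// Rough sketch of COBOL PERFORM semantics. Paragraphs are parameterless
// blocks sharing WORKING-STORAGE, modelled here as lambdas mutating a
// static field. All names and values are hypothetical.
public class PerformSketch {
    static int total = 0; // stands in for a WORKING-STORAGE item

    static final List<Runnable> paragraphs = List.of(
            () -> total += 1,    // PARA-A
            () -> total += 10,   // PARA-B
            () -> total += 100); // PARA-C

    // PERFORM PARA-A THROUGH PARA-C: execute a contiguous range, then return
    static void performThrough(int from, int to) {
        for (int i = from; i <= to; i++) paragraphs.get(i).run();
    }

    public static void main(String[] args) {
        performThrough(0, 2); // PERFORM PARA-A THROUGH PARA-C
        // PERFORM PARA-A VARYING IX FROM 1 BY 1 UNTIL IX > 10 is just a
        // for loop, but the body lives in the paragraph, possibly pages away:
        for (int ix = 1; ix <= 10; ix++) paragraphs.get(0).run();
        System.out.println(total); // 121
    }
}
```

The "fall through" case the comment describes has no safe Java analogue: it's the absence of any call boundary between paragraphs at all.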

The main advantages I think COBOL had/has are:

- no memory allocations; you spelled it all out in the DATA DIVISION.

- no various sizes of integers, floats, etc. You spelled it out with PICTUREs, like S99V99 is a signed number with 2 digits to the right and left of the decimal point. That's what it was on every machine.

- no buffer overflows; if you have PIC X(25), the thing is 25 characters, period. You try to move a PIC X(30) to a PIC X(25), and it just chops off the last 5 bytes. You move a PIC X(10) to a PIC X(25) and it pads with spaces. Very simple rules.

Combined, these fairly unique features make COBOL extremely portable.
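The MOVE rules described above are simple enough to model directly; here is a hedged Java sketch of alphanumeric (PIC X(n)) moves, with the field sizes and sample values invented:

```java
// Sketch of COBOL MOVE semantics for alphanumeric fields. A target
// declared PIC X(25) is always exactly 25 characters: a longer source is
// truncated on the right, a shorter one is padded with spaces, so a
// buffer overflow is impossible by construction.
public class MoveSketch {
    static String move(String source, int targetSize) {
        if (source.length() >= targetSize) {
            return source.substring(0, targetSize); // chop off the excess
        }
        StringBuilder sb = new StringBuilder(source);
        while (sb.length() < targetSize) sb.append(' '); // pad with spaces
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println("[" + move("ABCDEFGHIJ", 5) + "]"); // [ABCDE]
        System.out.println("[" + move("HI", 5) + "]");         // [HI   ]
    }
}
```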


> The world changes so much

Yes and no. Really, the developed world has very stable laws; this is one of the pillars of its power.

So new companies, like Uber, Apple (the App Store), Microsoft (when it created the Xbox), etc., certainly have to create new infrastructure from scratch, because their new vision is their competitive advantage.

But for old businesses, like finance, energy, and some others, the most important thing is to take every measure to cut transaction costs, and they achieve this by reusing old information infrastructure (not only this, but it is an important part of the equation). Unfortunately for big old businesses, their business models were created when only mainframes could withstand their scale (clouds built from micros only became mature enough in the 1990s, I think; minis were just never considered a replacement for big machines), so big companies carry a large mainframe heritage.

I know a few cases where a large company moved its infrastructure to the cloud, but as far as I know, they mostly prefer to remain with old but reliable mainframes if possible.

At the end of my speech I could add that I myself could be considered a hunter from the micro world, one who hunts for people who wish to switch (to micros and clouds), so I really know how much it costs :)

But to be honest, I have romantic feelings for mainframes; they are an important part of history, and I think we should preserve them for future generations.


Most of these systems are likely software ships of Theseus.

Very few lines of code will be unmodified from their original state, but at any given point in time the changes required will only have been small.

Hence it’s always been a pragmatic choice to do small modifications rather than a ground-up reimplementation.


The old programs do their job. They work and they're stable.

But I guess you can try to change that if you want.


COBOL is my next career move. I'm tired of these modern languages.


With luck you will be able to enjoy OOP if the compiler supports COBOL 2014, or even async/await with COBOL 2023. :)


Let us join hands in a moment of silent prayer for those unfortunate souls


Eh. There are ways in which their lives are easier and more zen :)


And some of those programmers are quite lively for being in their 70s!


That's just the thing. I think there's this view/assumption that they're all in their 70s, and in some shops that may be the case; but this is an active system, actively managed and enhanced/developed. These are normally-aged developers :-)

(Heck, I did about 18 months of "some COBOL" along other stuff for a job right out of university. It's... fine? It's a language:)


Is it hard to recruit younger devs? Also, I heard that the hard part of using COBOL is that almost everyone uses a different "flavor" of COBOL, with different toolchains, and most of the ecosystem is proprietary. Is that still true?


It's true, but way less true than for any other modern programming language I've seen today with our fancy frameworks and models and libraries and methodologies changing radically every 12 minutes :).

That being said, the hard part of COBOL is not COBOL, or even the tools/frameworks. The hard part of COBOL is that it's primarily a business language, as per the name. If you're coding in COBOL, you're unlikely to be developing a new webserver or compiler or webpage or whatnot, and unlikely to be focusing on technology while staying agnostic to the business domain it supports. In fact, you're coding key critical business logic, so you really need to be 30% good at COBOL, and 70% good at understanding business, talking business requirements, and eliciting business requirements from extremely non-programming people.

Again, cannot speak for the full COBOL ecosystem, but I have never recruited a "COBOL person for just COBOL" (other shops I'm sure have). It's something that sneaks up on you - maybe it's one of 3-4 things you need to work on a new job, or maybe there's an emergency legislative change and James is on vacation so Fatima looks at the COBOL program and changes a simple tax formula on it, and bam, now Fatima is the local COBOL guru and gets more and more COBOL assignments until she is de facto actual COBOL expert :).

More specifically, I have worked over my career on a lot of PeopleSoft systems - ERP applications for Payroll, Finance, CRM, EPM, etc. Almost all the languages in it are, if not proprietary, then weird and unknown to the HN audience: PeopleSoft Application Engine, PeopleCode, SQR (Structured Query Reports), and COBOL. The application itself is made in standard languages, but updating and customizing the business logic over decades of ownership is done primarily in those 4 languages. So you don't become a COBOL developer, you become a PeopleSoft developer, and COBOL is one of the pillars of it. Thus, I don't hire COBOL developers, I hire PeopleSoft developers.

Hope that helps :)


The punched card reference is most apt for COBOL: the card format and its limits dictated the column restrictions, hence COBOL historically had an 80-column maximum, and code sure does feel legacy when it's that mature.


It's ironic that the article later on mentions nonchalantly that there's an IDE built as a VS Code extension.


Indeed, COBOL 2023 has just been ratified.


In the late 90s I worked for a vendor of CRM software. A fair bit of billing and payment-handling code was written in COBOL. Wasn't mainframe - ran on a couple of flavours of proprietary Unix. The COBOL compiler vendor was Microfocus.

I didn't have any training in COBOL, but found it pretty easy to read and understand - at least for the fairly simple business logic in a billing system. I didn't have to write anything - just do some debugging when it didn't output as expected. I wouldn't want to do anything too mathsy or any heavy string processing with it, but it seemed a good fit for the application.

I did some more work for the same company later. A descendant of that software is still running today. At some point between 2000 and the late 20-teens they migrated to Linux, and I think at that point used some sort of COBOL-to-C transliteration software, and the Microfocus compiler was jettisoned. Not sure if the decision was because Microfocus was very expensive (I have heard that, but have no personal experience of it), or just didn't support Linux at that time.

That transliterated code is still running today, but it's a bit of a nightmare to maintain. If GnuCOBOL had been mature enough whenever that migration happened, I suspect it would have been a much better approach than transliteration. Too late for that code base though.


The "mathsy" situation can be more difficult than most think.

GNU COBOL uses GNU MP by default for calculations, instead of IEEE-754, which came decades later.

Not understanding the math of COBOL has led to many failed porting attempts.

https://medium.com/the-technical-archaeologist/is-cobol-hold...
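The practical difference is easy to demonstrate: binary IEEE-754 doubles cannot represent most decimal fractions exactly, while decimal arithmetic (the kind COBOL's PICTURE clauses describe and GnuCOBOL's GMP-backed math provides) can. A small Java illustration of the same gap:

```java
import java.math.BigDecimal;

// Binary floating point vs. decimal arithmetic. The values are
// illustrative; the point is that 0.1 and 0.2 have no exact binary
// representation, so the error surfaces immediately.
public class DecimalVsBinary {
    public static void main(String[] args) {
        System.out.println(0.1 + 0.2); // 0.30000000000000004

        // Decimal arithmetic: exact, the way a PIC 9V9 field would behave.
        BigDecimal a = new BigDecimal("0.1");
        BigDecimal b = new BigDecimal("0.2");
        System.out.println(a.add(b)); // 0.3
    }
}
```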


I hate that it's spam-walled but that Medium article sure was a riveting read.

Basically the conclusion is that for mainframe systems that need to process lots of transactions fast, the performance of COBOL is hard to beat. Languages like Java are not even very well suited for these types of calculations, since BigDecimal is not part of the core programming idiom.

With the additional cost that migration would carry, it's actually less risky and more cost effective to keep maintaining the COBOL system, even if it means paying in-house to train programmers in this ancient technology.


"And when you understand how Java does math and how COBOL does the same math, you begin to understand why it’s so difficult for many industries to move away from their legacy."

Would it have killed her to explain how COBOL does the math? I would think that for financial transactions fixed-precision arithmetic is used, so IEEE-754 compatibility is irrelevant.


> And when you understand how Java does math and how COBOL does the same math, you begin to understand why it’s so difficult for many industries to move away from their legacy

Java does math the same way as COBOL does math, if you use java.math.BigInteger, java.math.BigDecimal, javax.money, etc

Unlike (say) C++ or C#, Java doesn't have operator overloading, so BigInteger/BigDecimal/etc don't let you use ordinary operators such as + or *, instead you have to use method calls such as add() and multiply()

But, that's not really that verbose compared to Cobol's "ADD ONE TO X GIVING X", etc.

And if you really want + and * with BigInteger/BigDecimal/etc, you can use a JVM language which has operator overloading, such as Kotlin or Scala.
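A sketch of the correspondence; the COBOL statements in the comments, and the scale and rounding choices, are illustrative rather than taken from any particular program:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// COBOL-style fixed-point math in Java. A field declared PIC S9(5)V99
// holds exactly two decimal places; BigDecimal with scale 2 and an
// explicit rounding mode behaves the same way.
public class CobolStyleMath {
    public static void main(String[] args) {
        BigDecimal price = new BigDecimal("19.99"); // 01 PRICE PIC S9(5)V99.
        BigDecimal qty   = new BigDecimal("3");

        // COMPUTE TOTAL ROUNDED = PRICE * QTY
        BigDecimal total = price.multiply(qty).setScale(2, RoundingMode.HALF_UP);
        System.out.println(total); // 59.97

        // ADD 1 TO COUNTER -- method call instead of an operator
        BigDecimal counter = BigDecimal.ZERO;
        counter = counter.add(BigDecimal.ONE);
        System.out.println(counter); // 1
    }
}
```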


I like decimal math, and I don't think it's well supported in many languages.

I also like AppleScript's crazy English (or French or Japanese) syntax, and I can't help but wonder if it (and HyperTalk) were partially inspired by COBOL.


The vast majority of COBOL in production runs on IBM mainframes in conjunction with JCL (Job Control Language). If you are looking to offload COBOL from a mainframe to a cheaper platform JCL is a must. I love that this project exists but it’s only one half of a solution to migration off of a mainframe.


Talking with mainframe guys for the last half decade, they seem to avoid JCL when they can and treat REXX like a super power.


Are these mainframe developers or system-level guys? I wouldn't use REXX to run production batch processes. I would use REXX for TSO utilities and ISREDIT macros. JCL is pretty much a must for running batch processing. It is waaaaay simpler and more restartable than trying to do the same thing with REXX.


Yes, but you use the REXX to generate the JCL.


Hmm. I guess I am just not seeing the use case, but I'm not saying there isn't one. JCL isn't hard to do. You usually just copy 80% of it from other jobs and change dataset names, etc...


Yes, automatically :)


When I last used COBOL (early 90's) we had a heap of JCL, for sure, but the real-time system used CICS for everything.

Is it still a thing? What's the FOSS solution for transaction monitoring?


I’m unfamiliar with JCL, but from a quick search it sounds like most scripts don’t use much of the language. Still, I’d bet that most of the JCL functionality is used if you look at a decent-sized collection of scripts.

How much of the full JCL do you think would be necessary to reimplement in order to get, say, 30% of the existing scripts to work?


From memory of working with mainframe programmers back in the 1990s, it isn't just JCL you need. Cobol programs typically used databases and transaction monitors as well.

If you're lucky the database will be one of IBM's SQL databases. If you're unlucky it will be something like IMS.


You just reminded me that in the mid-90s I took a TCP/IP workshop where the other attendees worked on mainframes. The chasm between what I was familiar with and what they were was impressively wide.


I have heard horror stories about IMS but have been fortunate to never have to use it. DB2 is pretty decent and very reliable.


IMS is a hierarchical database.

Under OS2200, DMS is also a hierarchical database, and (unfortunately) we use it.

My developers have described it as the Fort Knox of databases: very difficult to get data out of.


But you need a modern enterprise-quality framework. Can it run Cobol on Cogs?

http://www.coboloncogs.org/INDEX.HTM


"Compliance-wise, it passed 97% of COBOL 85 conformance tests, a success rate not yet achieved by proprietary vendors, Sobisch boasted."

Stepping back and thinking about that is amazing: a standard written in 1985 (I was doing COBOL back then, into the 90s) has yet, even today, to have any of the big names/players reach full compliance, nearly 40 years later.

I'd love to read more about that aspect


The same applies to SQL, FORTRAN, JS engines, POSIX, even C and C++ compilers, if you bother to go through all the fine-print legalese in the standards and cross-check with their latest implementations; outside the big three it is even worse.

It appears everyone only strives for good enough, fine-tunes via bug reports for lesser-known features, and that is about it.


From an SQL background, this doesn't really surprise me at all. PostgreSQL is the closest anyone's ever got to being fully ANSI SQL compliant, with most of the proprietary vendors having rather paltry compliance numbers.


>I'd love to read more about that aspect

See the

  DATA DIVISION. 

  FILE SECTION.


COBOL's boilerplate doesn't seem that much worse than what Java programmers lived with for decades.


The big difference is I think Cobol typically (but not always) requires knowledge of the mainframe as well, while Java more or less requires some basic OS knowledge, JVM, and all the boilerplate.

Edit: mainframes seem neat, but I wouldn't want to code in either personally.


They are both verbose. But cobol is more data definition oriented.


Don’t miss the GnuCOBOL FAQ and How To, perhaps one of the largest such documents ever compiled:

https://gnucobol.sourceforge.io/faq/index.html

(may cause mobile browsers to crash while loading)


> the past three years, it has received attention from 13 contributors with 460 commits.

That doesn’t sound like very many commits. Pretty mature!


> There’s no support yet for objects or messages in GnuCOBOL.

> Objects was “a nice feature from COBOL 22, which isn’t used that much,” Sobisch said.

> Messaging just got reimplemented recently, and is still a new feature for the COBOL crowd to grapple with, Sobisch said. So, no support in GnuCOBOL yet.

COBOL 2022. My god.


That hello world program is brutal. Truly, the language is the first of its kind.

Hopefully this helps fleets of developers bring their legacy code (more legacy than most of us can imagine) into a modern stack, and start applying modern SWE principles to it.


It's logical and it makes sense. I would argue, at first glance, that C's hello world (#include <stdio.h>, int argc, char ** argv, etc) would be more confusing to a total newbie. Java's even more so.
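For comparison, the canonical fixed-format COBOL hello world, which GnuCOBOL compiles as-is (free-format COBOL lets you drop the column conventions):

```cobol
       IDENTIFICATION DIVISION.
       PROGRAM-ID. HELLO.
       PROCEDURE DIVISION.
           DISPLAY "Hello, world".
           STOP RUN.
```

Verbose, sure, but every line is an English-like statement rather than includes, types, and entry-point signatures.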


Well that's exciting :)


It’s interesting but will anyone actually use it? z/OS for example doesn’t have a hierarchical file system. The version of COBOL they bundle supports data sets I’d imagine. Legacy folks will be hesitant to switch to it as well. When I worked at a bank they couldn’t even use the latest Java, due to compliance and regulatory reasons.


One of my subjects at uni in the '90s (IT degree) required me to learn COBOL. God, I hated that class; thankfully I don't remember much of it. It was actually before we learnt C/C++, and VB was a big thing then too.


Very exotic. Does it support multi threading? What kind of things can be done with it?


I'm guessing it does money math in a certain way vetted by this field that would be very difficult to recertify on some replacement.
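If I had to guess at the mechanism: COBOL's PICTURE clauses give you fixed-point decimal arithmetic by default, so cent-level sums stay exact instead of drifting the way binary floats do. A rough GnuCOBOL sketch (the program and field names are invented):

```cobol
       IDENTIFICATION DIVISION.
       PROGRAM-ID. MONEY-DEMO.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
      * Six integer digits plus two implied decimal places: exact cents.
       01  WS-TOTAL  PIC 9(6)V99 VALUE ZERO.
       PROCEDURE DIVISION.
           PERFORM 1000 TIMES
               ADD 0.10 TO WS-TOTAL
           END-PERFORM
           DISPLAY WS-TOTAL
           STOP RUN.
```

Adding 0.10 a thousand times lands on exactly 100.00 here, where the equivalent loop over a binary double accumulates rounding error. That decades-certified decimal behaviour is exactly the kind of thing that is painful to re-prove in a rewrite.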


Not the COBOL language, but the IBM mainframes: five nines of availability, duplication of every component, hot-swappable everything from power supplies to CPU and RAM.

You can get to more than five nines with Parallel Sysplex, but even on its own the big iron is incredibly reliable. This is why banks, airlines, and other old-school businesses have been running mainframes and cannot ever migrate off of them.

This stuff was engineered like it was an airplane full of people.


And yet my bank very regularly goes down for (scheduled) technical maintenance.

Most mom-and-pop stores running Wordpress on shared PHP hosting are more reliable than that.


A bank based in my region, one of the biggest in Europe, uses IBM mainframes and has hundreds of thousands of COBOL programmes. Their availability is disastrous. They need to regularly reboot LPARs (logical partitions of the mainframe) because they're so brittle, resulting in a segment of the clients unable to use online services. And on the other hand, services that need to stay available, like websites or instant settlement, obviously don't run on mainframes. That means that the website is up... but only displays a maintenance message.


It's the servers that go down! Not the 'frames. Those will go down maybe once a year for maintenance, if that. The mainframe engineers will brag your ears off about that and about the unreliability of the "distributed" systems (a.k.a. servers, a.k.a. what everyone else uses).

When you log on to your online banking you're interacting with servers, not with the mainframes directly. The servers are the interface; the mainframe is, let's say, the far back end. It will be handling millions of transactions a second while the servers are down for maintenance: payments, transfers, etc., not just online banking.

Which is why it can't keep failing every few months or so.


I definitely remember my bank warning me about credit and debit card unavailability, so it's definitely not just the online systems.

If cards don't work and the online systems don't work, I don't know what else does. Those maintenance periods usually happen at night, so branches aren't open and wire transfer systems don't work.


> […] my bank warning me about credit and debit card unavailability […]

Your bank might have meant card balances being unavailable in the online banking which is more plausible than cards being generally unavailable for payments. It is not indicative of the mainframe actually being down (although not entirely improbable, either), and it is more likely that an intermediary (a service or a server) was undergoing maintenance.

Payment processing involves multiple tiers and multiple routing layers built into it – to ensure very high availability and that a payment is almost always guaranteed to process (successfully or otherwise – does not matter). Payment networks also impose stringent technical requirements onto the banks connecting to them. A Raspberry Pi running Slackware Linux from 1993 and powered by a dangling street pole wire would not be allowed to connect, for example.

Your bank (or mine, for the sake of the conversation) is the terminal point in this whole payment processing chain, with the payment network (Visa, Mastercard but not AmEx[0]) being the port of entry for a payment. Depending on the country, a country may have its own local payment Visa / MC processing centre and if that is the case, local payments will be routed to the local card payment processor. Otherwise, a global Visa/Mastercard will assume the payment. Then the payment network contacts your local bank to authorise the payment. Depending on the nature of the failure + other factors, the Visa/Mastercard can authorise certain payments on behalf of your bank if they fail to reach your bank and will forward the payment particulars onto your bank so that your bank could correctly process your card payment later. It is more common in overseas payment scenarios, i.e. you are travelling overseas and especially so when travelling outside the first world countries.

Payment networks and banks do not like such situations and actively loathe non-real time payment authorisations, yet they allow them for a narrow number of use cases at their own discretion.

[0] AmEx own their own global payment network and do not allow other financial institutions to gain access into it.


Maybe your bank goes down, but the Visa and Mastercard systems have been processing card payments 24/7 non-stop, and I don't even remember hearing of a single incident where the system was down even partially.


I don't know the particulars about your bank, obviously, but the thing is, mainframes serve so many millions of clients, including clients of the largest money transfer networks (Amex, Visa, Mastercard, everyone really), that if there's an outage it will make the news, internationally, and it will be front-page news too. With live updates.

In fact, if I think about it, I don't think I remember any time when a serious outage that was eventually explained in the press was the fault of some mainframe going down. Usually it's something else, like someone misconfigured something or something didn't update correctly etc. stuff that sounds a lot like day-to-day web dev stuff.

So I think maybe it was something else that went on with your bank, that only affected your bank. Like inkyoto says below, maybe some DNS went down?


That sounds like a shit bank. I can recall only a single system-level outage at my bank in 20 years.


> IBM mainframes have five nines of availability

This is incredible. I miss the time when the difference in reliability of mainframes was visible to the naked eye.

Now I know many organizations using Erlang/OTP or Java EE, and mostly they are considered good enough, comparable to a mainframe environment and a few times cheaper.

BTW, Erlang at British Telecom achieved five nines of reliability.

But I must admit Erlang/Java are not best suited for finance/legal applications, and I don't hear about finance/law libraries for Erlang, so this could be an issue (or an opportunity for somebody, depending on how you look at it).


lol… this comment just reminds me that how likely you are to die is determined by an actuary's estimate of what a company can get away with.


And your comment just made me wonder if the two recent Boeing crashes actually mean the IBM hardware is even *more* reliable


The recent scandals with Boeing and United are the result of Trump deregulating airlines. https://www.npr.org/2020/12/03/942345240/trump-administratio...

A lot of regulation just became "self-certify", and companies obviously cut costs and outsourced everything, including fleet maintenance.

This article is from 2015, but things have got way worse since then: https://www.vanityfair.com/news/2015/11/airplane-maintenance...

European airlines and Airbus are flying just fine; it is Americans that have a problem, due to an over-politicized, polarized society and corrupt politicians.


The MAX was certified less than 2 months after Trump took office. He had nothing to do with it.


To Trump's credit he took action against the 737 Max in a timely manner and came out against Boeing as hard as I believe a POTUS could.


IBM hardware is reliable, BUT IBM avoids the aerospace industry as a flight computer supplier.

After the Saturn V, flight computers were made by other companies. It's hard to tell whether this is down to conservative regulation or because there isn't enough money in it for such a big company, but the fact is that IBM electronics are nearly absent from large civilian airplanes.


Except for the IBM 737 mainframe.


Modern cloud backend data processing is 5 9s reliable too, for very core stuff, not all the app frontends and minor features.


Modern cloud “5 nines” isn't a promise of availability but a promise to get some store credit each month when it inevitably fails (adjusted down for usage, because you don't use all the capacity you pay for, so they just credit the percentage you do).

The trick is to make sure, when you promise 5 9s to your customer, that you build in the same bullshit clauses so you don't get left holding the bag.

It's gross.


You don't even have the assurance that the RAM is ECC in the cloud...


...that wasn't made by Boeing in the last 20 years.


For some value of "very difficult"


I award them no points, and may God have mercy on their souls.


This exists because many COBOL systems have been rewritten several times over, with the modern rewrites never reaching full feature parity with the original.

Modern day software engineers need to eat a huge slice of humble pie.


What took so long?



