Neither Visa nor Mastercard really implements ISO 8583 in a standardized way, which means they each issue many thousands of pages of documentation covering not only which of the standard fields they use and how, but also how they cram their proprietary data into the messages. Most card management/issuance platforms do a decent job of abstracting this away, though.
A transition to ISO 20022 would be an improvement, but I don't think it will ever meet the required ROI threshold (globally) for that to happen.
Can attest, having searched through literally thousands of pages of documentation in an attempt to attribute the payment processing switch vendor when analysing the ATM jackpotting malware ‘fast cash for Linux’[1]. The best I could do was determine the currency used for the fraudulent transactions, which may imply the country of the target financial institution.
Would be curious if anyone else has further insights.
[1] https://haxrob.net/fastcash-for-linux/
The large card networks have so many proprietary behaviors and extensions that I really doubt whether any common standard would even make sense at this point.
And if you look at how "modern" ISO 8583 is evolving, almost all changes and extensions are already happening in TLV-like subfields (where a new field unexpectedly appearing doesn't make existing parsers explode spectacularly), and the top-level structure is essentially irrelevant.
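To make the TLV point concrete, here's a minimal sketch in C (assuming single-byte tags and lengths; EMV-style subfields can use multi-byte forms of both, and the "known" tag below is purely hypothetical): a parser can skip any tag it doesn't recognize, so new fields don't break it.

    #include <stddef.h>
    #include <stdio.h>

    /* Minimal TLV walk with single-byte tags and lengths (a simplification;
       EMV-style tags and lengths can be multi-byte). Unknown tags are skipped
       by jumping over `len` bytes, so new fields don't break old parsers. */
    static void walk_tlv(const unsigned char *buf, size_t n) {
        size_t i = 0;
        while (i + 2 <= n) {                /* need at least tag + length */
            unsigned char tag = buf[i++];
            unsigned char len = buf[i++];
            if (len > n - i) break;         /* truncated value: stop, don't over-read */
            if (tag == 0x5A)                /* hypothetical "known" tag */
                printf("known tag 0x%02X, %u bytes\n", tag, (unsigned)len);
            else
                printf("unknown tag 0x%02X skipped\n", tag);
            i += len;                       /* skipping needs no knowledge of the tag */
        }
    }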
Of course, getting familiar with that outer layer is a significant hurdle for newcomers, but I don't get the sense that catering to them is a particular focus for either network. ISO 8583 is also a great moat (one of many in the industry, really) for existing processors, which have no incentive to switch to a new standard and whom the networks need to keep at least somewhat happy.
I thought that contact ("chip-in") EMV was bad until I saw some of the stuff coming out of Discover cards for contactless EMV. Buying a test card set from somewhere like B2 Systems was very beneficial, even just for integrating an EMV reader from a hardware device to a payment processor.
The problem is that the contactless stuff is all custom per network.
Some of the implementations are reasonably close to contact EMV; others might as well be a completely different stack and technology.
In this day and age of AI, having this kind of insider knowledge, which is scattered, usually behind paywalls and NDAs, and constantly being updated, is a real advantage: no LLM will be able to replace you for quite a while.
Job security via obscurity.
You're right, but that's only because it's already come to this. Would it have been that hard to say: these are the standardized fields, usable only in accordance with the standard, and these are the custom fields for your own BS?
I don't know the current state of affairs. Last time I worked on ISO 20022 (almost 10 years ago), our systems were doing a 1-to-1 mapping from ISO 8583, keeping every bit of unmaintainable shit one could imagine.
ISO 20022 roll-out is well underway. Unless the US decides to extend its war on the world to the rest of the G20, the plan is to be done a year from now, and if I'm not mistaken the US is already a member of the PEPPOL society.
It's the lingua franca of European banks and has been for some time. Back in 2018, when I built a piece of financial software, I talked ISO 20022 with a Swedish bank in Luxembourg.
This is not the case for card networks. I know of no plan for Visa or Mastercard to move to ISO 20022, and even if there were one, I am certain it would not be complete within a year from now. In fact, if they announced a migration like that, I would be dubious that it could be completed within 10 years; there are so many systems out there that would have to change.
On many other payment systems, yes, ISO20022 is or is becoming the lingua franca - e.g. FedWire is going to move next year.
The planning stage is history.
https://usa.visa.com/content/dam/VCOM/global/ms/documents/ve...
https://usa.visa.com/content/dam/VCOM/regional/na/us/sites/d...
Mastercard uses data-sucking nag screens, but I don't think you actually need to read the papers to get the point:
https://b2b.mastercard.com/news-and-insights/payments-modern...
https://b2b.mastercard.com/news-and-insights/report/iso-2002...
In 2018 SWIFT decided to migrate. Do you seriously believe that VISA and Mastercard did not notice this when it happened? Do you think they've been watching India adopt ISO 20022 for years and not acted upon it?
Edit: The reason adoption is fast once the devs can finally get to work is that it's XML: you get schema files and punch your programming button and generate a lot of the necessary code and then do the plumbing and call it a day.
Those papers are not concrete plans to move their core processing network to ISO 20022. The first one just talks about 20022 in general; the second refers to Visa DPS, which is effectively a wrapper over their EAS that does speak 20022, but their core comms are still 8583.
I'm sure that Visa and Mastercard are very aware of 20022, but being aware of it isn't the same as having a concrete plan to move - actually moving _everything_ would take a very long time, there are so many card issuers & acquirers out there with old systems plugged into Visa and Mastercard that would have to be replaced.
FYI, I actually built a cloud-based issuer processor connected to one of them within the last couple of years - that was 8583, and there was no option for it to be 20022. We would 100% have taken it if it were an option.
> you get schema files and punch your programming button and generate a lot of the necessary code and then do the plumbing and call it a day.
I think that's pretty naive about what you actually have to do to process card payments. Okay, yes, parsing messages is easier, but you still have to deal with HSMs and all the crypto stuff, PCI compliance, all the logic for the various message types, scheme compliance, and then the long tail where reality diverges from the spec (basically, acquirers will send you any old absolute nonsense, and you'll have to somehow figure it out, otherwise your customers' card payments get rejected).
Why would it matter to Visa and Mastercard what SWIFT and India (The central bank? The entire country?) are doing?
They run their own networks and everybody that wants to connect to them has to speak their protocols (which are completely custom btw; it’s out of the question to just swap out one for the other!)
> get schema files and punch your programming button and generate a lot of the necessary code and then do the plumbing and call it a day
Absolutely not. Parsing ISO 8583 is maybe 5% of the complexity of card processing (and that’s being generous). Sorry, but you seem to have absolutely no understanding of an industry you are making confident statements about.
Because they interact with banks and banks interact with them. If they refuse to support the protocols the banks use, what happens?
Sure, there's 3DS and blah blah blah, so what? 8583 is getting replaced and implementing 20022 is a breeze compared to 8583 for the specific reason I mentioned.
Sorry to be so direct, but you don’t seem to have any idea what you are talking about in this context.
Banks and payment card processing (which is what TFA is about) are basically two different worlds. One switching to a new data interchange format has essentially no consequences to the other.
While I could imagine Visa and Mastercard offering an ISO 20022 interface for new integrations, I’m willing to bet on the majority of volume staying on ISO 8583 for this decade and probably well into the next. They most certainly won’t force anyone to migrate either.
You don't have to imagine, just go read what the payment processing services publish and promote.
ISO 8583 is a massive liability, just an insane amount of technical debt and nasty workarounds that harm interoperability, i.e. profits. This is why both banks and the payment sector communicate so aggressively on this issue.
Correct, which is why people prefer to buy the 8583 implementations,
like https://jpos.org/
This is the way. Shove everything into field 47.
Dear god, I will never forget all of these terrible details.
Having been involved in several ISO8583 implementations/integrations, it's really quite wild how different each one was in both structure and required content from one another.
The type of protocol (message type, bitmap to define fields, followed by a set of fixed and variable length values) is pretty normal for the time it was developed in. Many low level things are basically packed C-structs with this type of protocol. It comes with some pitfalls on the receiver side: you have to carefully validate dynamic field lengths and refuse to read past the end of the message or to allocate an unbounded buffer. But all of those are well understood by now.
What I find baffling is that this "standard" does not specify how to encode the fields or even the message type. Pick anything: binary, ASCII, BCD, EBCDIC. That doesn't work as a standard; every implementation can send you nearly any random set of bytes with no way for the receiver to make sense of them.
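For the curious, the presence-bitmap layer itself is tiny. A sketch in C, assuming raw binary bitmaps laid out MSB-first; as noted above, many deployments instead send them hex-encoded in ASCII or EBCDIC:

    #include <stdint.h>

    /* Presence test for ISO 8583 field n (1-128), assuming raw binary bitmaps.
       Bit 1 (the MSB of byte 0) signals that a secondary bitmap follows for
       fields 65-128. */
    static int field_present(const uint8_t bitmap[16], int n) {
        if (n < 1 || n > 128)
            return 0;
        if (n > 64 && !(bitmap[0] & 0x80))  /* no secondary bitmap present */
            return 0;
        int idx = n - 1;
        return (bitmap[idx / 8] >> (7 - idx % 8)) & 1;
    }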
> Many low level things are basically packed C-structs with this type of protocol.
Not really: C structs notably don't have variable-length fields, but ISO 8583 very much does.
To add insult to injury, it does not offer a standard way to determine field lengths. This means that in order to ignore a given field, you'll need to be able to parse it (at least at the highest level)!
Even ASN.1, not exactly the easiest format to deal with, is one step up from that (in a TLV structure, you can always skip unknown types by just skipping "length" bytes).
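A sketch of the problem in C: even the humble two-digit "LLVAR" length prefix arrives in different encodings depending on the dialect, so a receiver can't even skip a field without out-of-band knowledge of the implementation (the two conventions below are common ones, not an exhaustive list):

    /* Length prefix of an "LLVAR" field (two decimal digits, max 99).
       The same logical prefix arrives as two ASCII bytes ("17" = 0x31 0x37)
       in one dialect and as one packed-BCD byte (0x17) in another, so even
       skipping the field requires knowing which dialect you're speaking. */
    static int llvar_len(const unsigned char *p, int ascii_dialect) {
        if (ascii_dialect)
            return (p[0] - '0') * 10 + (p[1] - '0');  /* two ASCII digits */
        return (p[0] >> 4) * 10 + (p[0] & 0x0F);      /* one BCD byte */
    }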
> Not really: C structs notably don't have variable-length fields
Feast your eyes: C99 introduced an ~~abomination~~ feature called flexible array members (FAM), which allows the last member of a struct to be a variable length array.
If you want to ~~gouge your eyes out~~ learn more, see section 6.7.2.1.16 [0].
[0] https://rgambord.github.io/c99-doc/sections/6/7/2/1/index.ht...
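For anyone who hasn't run into one, a minimal illustration (the struct is hypothetical; the mechanism is the actual C99 feature):

    #include <stdlib.h>
    #include <string.h>

    /* C99 flexible array member: the last member is an incomplete array and
       sizeof(struct msg) excludes it; one allocation covers header + payload. */
    struct msg {
        size_t len;
        unsigned char data[];        /* the FAM; must be the last member */
    };

    static struct msg *msg_new(const unsigned char *src, size_t len) {
        struct msg *m = malloc(sizeof *m + len);   /* header plus tail in one go */
        if (m) {
            m->len = len;
            memcpy(m->data, src, len);
        }
        return m;
    }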
> To add insult to injury, it does not offer a standard way to determine field lengths
That's awful. You can sort of say the same about variable-length structs in C, but at least the struct type definition usually has a field that tells you the length of the variable-length array at the end.
> [...] feature called flexible array members (FAM), which allows the last member of a struct to be a variable length array.
Oh, ISO 8583 has these too!
Sometimes they're even combined with the "feature" described in the article where there's a variable number of fixed-length elements, except for the last element, which is a variable-length string (or sometimes the last field type repeated n times). That's always "fun" to work with.
ISO 8583 really is a living museum of all ideas people had about binary encoding in the last half century or so.
FAM was a (not so successful) attempt to standardise some existing usage.
As far as I'm concerned, we solved binary formats with ASN.1 and its various encodings. Everything afterwards has been NIH, ignorance, and square wheel reinvention.
ASN.1 DER, BER, or OER? Implicit and optional can really break compat in surprising ways. Then there is the machine-unfriendly roster of available types. XDR was more tuned for that.
Finally, free tooling doesn't really exist. The connection to the OSI model also didn't help.
> ASN.1 DER, BER, or OER?
Or XER or JER! One of the brilliant things about ASN.1 is that it decouples the data model from the serialization format. Of the major successor systems, only protobuf does something similar, and the text proto format barely counts.
> Implicit and optional can really break compat in surprising ways
Any implementation of any spec can be broken. You could argue that the spec should be simpler and require, e.g., explicit tagging everywhere, like protobuf. Sure. But the complexity enables efficiencies, and it's sometimes worth making a foundational library a bit more complex to enable simplifications and optimizations throughout the ecosystem.
> Then there is the machine-unfriendly roster of available types
Protobuf's variable-length integers are machine-friendly now? :-) We can always come up with better encoding rules without changing the fundamental data structures.
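For reference, this is the base-128 varint decoding being teased: byte-serial and branchy, one continuation bit per byte (a generic sketch of the wire format, not protobuf's actual library code):

    #include <stddef.h>
    #include <stdint.h>

    /* Base-128 varint decode (the protobuf wire format's integer encoding):
       7 payload bits per byte, MSB set means "another byte follows".
       Returns bytes consumed, or 0 on truncated/overlong input. */
    static size_t varint_decode(const uint8_t *p, size_t n, uint64_t *out) {
        uint64_t v = 0;
        for (size_t i = 0; i < n && i < 10; i++) {  /* 10 bytes max for 64 bits */
            v |= (uint64_t)(p[i] & 0x7F) << (7 * i);
            if (!(p[i] & 0x80)) {                   /* continuation bit clear: done */
                *out = v;
                return i + 1;
            }
        }
        return 0;
    }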
> Finally, free tooling doesn't really exist.
What do you mean? You use ASN.1 to talk to every server talking SNMP, LDAP, or the X.509 PKI. Every programming environment has a way to talk about ASN.1.
> The connection to the OSI model also didn't help.
Agreed. The legacy string types aren't great either. You can, of course, do ASN.1 better. No technology is perfect. But what we don't need, IMHO, is more investment in "simple" technologies like varlink that end up being too simple and shunting complexity and schema-ness that belongs in a universal foundation layer into every single application using the new "simple" thing.
> ASN.1 DER, BER, or OER? Or XER or JER!
My opinion is that DER is better. (However, DER is a restricted form of BER; any DER file is also a valid BER file, but DER imposes certain requirements on the encoding so that it is a canonical form. The other canonical form is CER, but in my opinion DER is better.)
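A concrete example of the distinction: both byte strings below decode to INTEGER 5 under BER, but only the first is valid DER, because DER demands the shortest length form; that restriction is what makes DER output byte-for-byte comparable:

    /* One value, two encodings: both are legal BER for INTEGER 5, but only
       the first is valid DER, since DER requires the shortest length form. */
    static const unsigned char der_int5[] = { 0x02, 0x01, 0x05 };        /* tag, len, value */
    static const unsigned char ber_int5[] = { 0x02, 0x81, 0x01, 0x05 }; /* 0x81 = long-form length */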
> Every programming environment has a way to talk about ASN.1.
Not all implementations are well designed, though; I have seen many implementations of ASN.1 that are not. I made up my own, hoping to do better, but we will see whether it is (hopefully) better.
> But what we don't need, IMHO, is more investment in "simple" technologies like varlink that end up being too simple
I agree with this, and it is important. This was my intention when designing my own stuff: to be neither too simple nor too complicated. Most stuff I see tends to be one or the other, so I try to make stuff better than that.
There are zero free ASN.1 compilers or module checkers.
> There are zero free ASN.1 compilers or module checkers.
I must be misunderstanding what you're saying, because this exists: <https://www.erlang.org/doc/apps/asn1/asn1ct.html#>
From the linked page:
> asn1ct
> ASN.1 compiler and compile-time support functions
> The ASN.1 compiler takes an ASN.1 module as input and generates a corresponding Erlang module, which can encode and decode the specified data types. Alternatively, the compiler takes a specification module specifying all input modules, and generates a module with encode/decode functions.
XML also decouples the data model and serialization with the XML Infoset specification.
I think ASN.1 is good, but there are some problems with it. It should not need separate type numbers for each of the ASCII-based string types and for each of the ISO-2022-based string types; you could use one number for ASCII and one for ISO-2022, with the restrictions being part of the schema rather than of the BER/DER. Furthermore, I think there are too many date/time types. Also, some details of the other types (e.g. the real number type) are messier than they would be if designed better.
I have made up "ASN.1X", which adds some types (key/value list, TRON string, PC string, BCD string, Morse string, reference, out-of-band) and deprecates others (such as OID-IRI and some of the date/time types; the many ASCII-based and ISO-2022-based string types are kept because a schema might use them to distinguish elements in a SEQUENCE OF, SET OF, or a structure with optional fields, even though I would not have had so many such types if designing it from scratch). It also adds a few further restrictions (e.g. it must be possible to determine the presence or absence of optional fields without looking ahead) and some schema types (e.g. OBJECT IDENTIFIER RELATIVE TO). (X.509 does not violate these restrictions, as far as I know.)
I also have an idea for a new OID arc that would not require registration (some such arcs already exist, but this idea works differently, with a structure that fits OIDs better); I have partially written an initial proposal for how it could work, but it would need to be managed by ITU or ISO. (The identifiers are based on timestamps plus various other kinds of identifiers that may be registered at a specific time; even if those are not permanent, the resulting OIDs are permanent thanks to the timestamps. It also includes features such as automatic delegation for some types.)
There are different serialization formats for ASN.1 data; I think DER is best and that CER, JER, etc. are no good. I also invented a text-based format that can be converted to DER (it is not really meant to be parsed by other programs, since it is more complicated to parse than DER; converting it to DER with a separate program is better, to avoid adding that complexity to programs that do not need it), and I wrote a program that implements it.
A bitmap to define field presence doesn’t seem so offensive, as far as serialization formats go. FlatBuffers[1] use a list of offsets instead, but that’s in the context of expecting to see many identically-shaped records. One could argue that Cap’n Proto with zero-packing[2] amounts to the same thing if you squint, just with the bitmap smeared all over the message.
I mean, this specific thing sounds like it should have been a fatter but much more unexciting TLV affair instead. But given it’s from 1987, it’d probably have ended up as ASN.1 BER in that case (ETA: ah, and for extensions, it mostly did, what fun), instead of a simpler design like Protobufs or MessagePack/CBOR, so maybe the bitmaps are a blessing in disguise.
[1] https://google.github.io/flatbuffers/flatbuffers_internals.h...
[2] https://capnproto.org/encoding.html#packing
I'd trade the field layer of ISO 8583 for some ASN.1 any day!
Luckily, there's a bit of everything in the archeological site that is ISO 8583, and field 55 (where EMV chip data goes, and EMV itself is quite ASN.1-heavy, presumably for historical reasons) and some others in fact contain something very close to it :)
Telegram's "TL" serialization, which is part of its network protocol, also uses a bitmap for optional fields. It's an okay protocol overall. The only problem is that the official documentation[1] was written by Nikolay Durov, who is a mathematician. He just loves to overgeneralize everything to a ridiculous degree and spend two screens' worth of outstandingly obtuse text to say what amounts to, for example, "some types have type IDs before them and some don't, because the type is obvious from the schema".
[1] https://core.telegram.org/mtproto
Something similar is TLV, which is extremely common in binary network protocols because it's very flexible for compatibility.
A lot of payments chatter on here recently, with patio11 throwing out some great content as well. May I ask where this pretty visual explanation website was 25 years ago? ;) Oh, the woes of programming ISO 8583; I see another commenter mentioned EBCDIC, which adds a whole other level of mind-numbing when passing between the endians. It was a fun experience, however, back in the early 2000s when I worked in isolation with Discover to get the GUID field added to the ISO 8583 specification.
We are living in changing times on many fronts, and the world's financial systems are one of the new battlefields. Many are ignorant of what is occurring, but with big tech owning their own payments ecosystems, this should be insight for those not yet aware, as we are certain to see more following their lead. Some of those followers are entire countries (they are just bigger businesses, after all); it is already happening, and a select few are doing it.
Stay Healthy!
Painful memories of when you tell brands your data is coming in as ASCII and they have it as EBCDIC.
I referred to it as "the woes", but yes, I agree with your choice of words as well. As an experienced technology builder, I generalize the struggles I vividly recall, and they often involve parties not doing the real work, who therefore lack comprehension of the details that are all that truly matter. This applies to a lot of things today, tech included, where most have no idea how anything works and the solution typically boils down to: did you power cycle it?
I learned a lot more about this discussing the PCI/DSS [0] regulation framework here [1]. It's about to change to a new 4.0 in 2025, which means that to use or run any payments system you'll have to meet ever more stringent regulation. This is going to start applying to other pseudo-currencies (in-game value tokens etc.) if they exceed a certain value and scale. At present Visa and Mastercard have a big stake in defining this (capturing the regulator).
Interestingly local real (non-digital) currencies like the Brixton Pound [2] and other local paper scrip seem to escape this, which seems a boost for paper technologies.
[0] https://en.wikipedia.org/wiki/Payment_Card_Industry_Data_Sec...
[1] https://cybershow.uk/episodes.php?id=36
[2] https://brixtonpound.org/
PCI-DSS is an industry standard, not a law. If you don't think it should apply to your domain, complain to your legislators/regulators, not the authors of PCI-DSS or the payment industry covered by it!
> Interestingly local real (non-digital) currencies like the Brixton Pound [2] and other local paper scrip seem to escape this
And so do countless other digital (non-real?) payment systems across the globe. That's not to say that there aren't any other security regulations, but they're also most certainly not in PCI scope.
Arguably, the original sin of the card payments industry in particular, and US American banking in general, is treating account numbers as bearer tokens, i.e. secret information; if you don't do that, it turns out that a lot of things become much easier when it comes to security. (The industry has successfully transitioned away from that way of doing things for card-present payments, but for card-absent, i.e. online, card transactions, the efforts haven't been nearly as successful yet.)
There is some confusion in that comment.
- PCI DSS 4.0 is already in place and is to be retired on December 31, 2024. PCI DSS 4.0.1 is the replacement and is in place already.
- PCI DSS 4.0.1 and game tokens have nothing in common. The applicability of PCI DSS requirements is decided by the card brands, aka Visa, Mastercard, etc. And it is the acquirers who enforce the standard on third-party service providers. The standard itself has no power over anyone.
- Mastercard and Visa have high stakes because technically they are the regulators. EMVCo, the core of card payments, was built by Europay (later acquired by Mastercard), Mastercard, and Visa. The M and V of it manage the chip on cards, online payments, and much more. PCI SSC is merely a supervisory authority that sets the standard and the process of assessments and investigations on behalf of these brands.
Side note: While the other card brands accept PCI DSS as an entry-level requirement, they do not have as much say in it as Mastercard and Visa.
I wonder if this is the standard that drove Charles Stross slightly insane and led to Accelerando.
https://www.antipope.org/charlie/blog-static/fiction/acceler...
Actually, based on the timing, this is probably the new, better standard that replaced the obscure protocols of the 70s.
Oh, this format was fun. You could see history unfold when parsing it. The messages I parsed were ISO-8583 with ~EBCDIC~ no, BCD. But one field contained XML. And the XML had an embedded JSON. The inner format matched the fashion trend of the year when someone had to extend the message with extra data. :-)
> The messages I parsed were ISO-8583 with ~EBCDIC~ no, BCD.
The "great" thing about most ISO 8583 implementations I've worked with (all mutually incompatible at the basic syntactic level!) is that they usually freely mix EBCDIC, ASCII, BCD, UTF-8, and hexadecimal encoding across fields.
Fascinating, I don't think I've ever seen an XML field! Do you remember which network that was for?
We were the issuer. So these were probably the payment processor's extensions. But we were issuing MasterCards.
It has been fun seeing all the different ways companies have come up with to work around the limitations of ISO 8583. One I’ve been seeing a lot lately is making an API call before/after the ISO message (with non-PCI data) to confer additional information outside of the payment transaction. Definitely speeds up time to market, but opens up a whole new array of failure modes to deal with.
I got my last company certified with Visa and Mastercard for authorization and clearing. It's funny how they call it a standard but it's anything but that. There were some similarities but a lot of subtle differences which made the process 10X harder. Mastercard was the worst to deal with.
Unlike Visa and Mastercard, I noticed that AMEX transaction notifications are near-instantaneous. There is something so magical about a notification popping up on my phone/watch literally the second I swipe a card. I always wondered about the layers on the stack which V/MC must have which AMEX doesn't.
> Unlike Visa and Mastercard, I noticed that AMEX transaction notifications are near-instantaneous.
No idea where you bank, but my Visa notifications are instantaneous, so the network is clearly capable of that. I'm with a modern European bank though, I wouldn't be surprised if the mainframe-loving US banks that do everything via overnight batch jobs are incapable of this.
With that said, there are places which straight up won't send your transaction to the network at purchasing time. Apple[1] is one notable example. They seem to have a cron job that runs at midnight and does billing for that day. This is really annoying if you're buying an expensive Apple product and have increased your card limit for one day only.
I've even seen places that do extremely-low-value transactions "on faith" - the transaction is entirely offline, it gets sent to the network the next day, and if it is rejected, your card number goes on a blacklist until you visit the company's office in person and settle the debt.
> I'm with a modern European bank though, I wouldn't be surprised if the mainframe-loving US banks that do everything via overnight batch jobs are incapable of this.
I wouldn't be surprised if your European bank still relies quite heavily on its mainframes. The mainframe offers high availability, reliability, and, contrary to popular belief, high throughput. Batch processing is just one thing they're good at; real-time processing at high speed is another.
You missed the footnote for [1].
Did you mean Apple Card, Apple Pay, or the Apple Store?
It's probably less about layers and more about the different number of stakeholders.
Visa/MC transactions go through at least four systems (merchant, merchant payment service provider/acquirer, Visa/MC, issuer processor); Amex is merchant acquirer and card issuer in one, so there is no network and accordingly at most three parties involved (merchant, merchant PSP, Amex).
That said, some of my Visa/MC cards have very snappy notifications too. In principle, the issuer or issuer processor will know about the transaction outcome even before the merchant (they're the one responding, after all), and I definitely have some cards where I get the notification at the same time and sometimes before the POS displays "approved".
As others say, it's not a matter of Visa vs. Amex. I use both a Mastercard and a Visa with a neobank in Europe, and I get instant notifications. It must be something to do with the bank (US banking is famously behind, but I also see days-long delays with traditional European banks).
Even more magical: when sending money to someone I'm physically present with, I hear their notification before the "money sent" animation has finished on my own phone.
Unlike Visa and Mastercard, I noticed that AMEX transaction notifications are near-instantaneous. There is something so magical about a notification popping up on my phone/watch literally the second I swipe a card. I always wondered about the layers on the stack which V/MC must have which AMEX doesn't.
Must be your bank, because both my Visa and MasterCard ping my phone instantaneously, too.
Some smaller banks upload available balances to processors and perform clearing later, in the back office only. They just don't have a hook to tie a notification to the authorization, so they send it only after the actual settlement.
>> Must be your bank, because both my Visa and MasterCard ping my phone instantaneously, too.
Well, that's sort of the thing... with Visa and MC, there is an extra layer or two (the bank, or Fidelity Information Services). With Amex, they own the full stack end to end.
Your card limit gets checked on every transaction. There doesn't seem to be a technical reason why the information flow back to me should be limited in any way. If the extra layer fails to work, the transaction fails to pass.
> Your card limit gets checked on every transaction.
Nope. The merchant can choose the level of verification - in some cases, like copying the card with an imprinter [1] or running phone transactions (yes, that is possible - it's called MOTO [2]), it's obviously impossible to check card limits.
The downside of CNP transactions is that the merchant is fully liable for everything from fraud to chargebacks to exceeded limits.
And then you've got card-present transactions where the network connectivity is down for whatever reason... it's been a while since I messed with that, but at least for German cards you could configure the terminal to store the account details for later submission once connectivity was restored.
[1] https://en.wikipedia.org/wiki/Credit_card_imprinter
[2] https://docs.adyen.com/point-of-sale/mail-and-telephone-orde...
I remember being charged only after a while when paying for bus/metro tickets in some places; I think those machines process transactions in batches or something.
I have a Visa card with a Canadian bank and usually get transaction messages within 5 seconds of payment. Maybe it is a per-bank thing?
It's a per-country thing. Card txns in the US are bananas: arcane, byzantine nightmares. (Source: worked at Canada's largest bank on txn processing software.)
I also get transaction notifications at a similar speed in the UK, in pretty much all of my bank accounts.
AMEX is the bank. For Visa/Mastercard, the latency is probably due to the bank they have to route the transaction through.
> "ISO 8583: The language of credit cards"
"ISO 8583: The language of both debit and credit cards"
And sometimes even bank transfers (I believe at least FPS in the UK used it, or possibly still does).
Also don't forget about prepaid cards, charge cards etc., depending on where they exist in your personal ontology of card funding methods ;)
We’ve had a lot of success with our Go library for iso8583
https://github.com/moov-io/iso8583
Great article that shows why ISO 20022 will replace 8583 over time, especially in areas not dominated by the M/V monopoly networks.
Credit cards, with all their nonsense about cash back and rewards, can be implemented in the new payment systems, with banks offering line-of-credit accounts that are part of the appropriate "national payment system", like UPI, PromptPay, Osko/PayID, FedNow, etc.
Instant settlement between accounts, low-cost fixed-price txns, etc.
Fun anecdote: Thailand's entire banking network (including regular wire transfers) was implemented with ISO 8583 (!). Part of the AnyID master plan (later renamed to PromptPay) was to replace the country's (ab)usage of ISO 8583 with ISO 20022. The Ministry of Finance hired UK-based Vocalink to build this converter, among other systems MoF hired them for. (AFAIK, the entire stack was written in Erlang.)
An interesting side effect of this low-level bit mapping is that various banks' authorization logic can be manipulated to increase auth rates by subtle bit flipping across various fields.
All the big fintech companies have ML running over changes to identify what results in the highest auth rates on a per-BIN basis.
I would be very surprised if bit flipping and ML were really used here; do you have any source?
While for sure there's a lot of signal and value in monitoring auth rates per BIN per payload, flipping bits can be extremely disruptive and counterproductive. From doing the wrong operation to being fined by the schemes, it's a lot of risk for not a lot of gain when these fields can be tuned ad-hoc for the few card issuers that deviate from the standard/norm.
So is ISO20022 the future then? There should* be a standard system that all the networks stick to... instead of the hodgepodge of bullshit there is now.
I really wonder what a future would look like without companies like Visa/Mastercard/Discover/AMEX.
Fun times reviewing the masking logic of credit card data spewed out in system logs, in base64-encoded (or god forbid, EBCDIC-encoded base64-encoded) ISO 8583.
(In the holiday spirit)
The only language of credit cards is points, cashback, APYs, and hard-to-read TOS.