A plain-English guide to the end-to-end pricing process — what happens, why it's painful, and why we're rethinking it.
Every month, SSE needs to figure out how much to charge customers for natural gas across multiple states — Georgia, Illinois, Ohio, Pennsylvania, and Michigan. That sounds simple, but it's not. The price customers pay is built from layers of inputs: the raw commodity price on the market, the cost to transport it, local fees and tariffs, weather-related adjustments, and business margin decisions.
The "Development of Pricing Process" is the entire journey from "what did natural gas cost on the open market today?" all the way to "this is the rate showing up on a customer's bill."
The outputs of each step become the inputs for the next. If something is wrong in Step A, every subsequent step inherits that error. If someone makes a typo in a spreadsheet in Step B, it could flow all the way through to a customer's bill in Step F.
What's happening: Every day, the Risk Management team pulls natural gas market prices from two external sources — Platts (a pricing service) and ICE (an exchange). These prices arrive as data files via FTP (basically, automated file transfers). The team then loads those prices into Openlink, the company's trading and risk management system. There's also a monthly process where they download city gate index prices from a website and load those into Openlink too.
In addition, they handle volatility uploads: options pricing data that is also pulled from files and loaded into the system. All of this establishes the "base price" that everything else builds on.
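To make the daily feed concrete, here is a minimal sketch of what "pull a price file and turn it into loadable records" looks like. The file layout, column names, and the `parse_price_file` function are all hypothetical; the real Platts/ICE files and the Openlink load format will differ.

```python
import csv
import io
from datetime import date

def parse_price_file(raw: str, source: str) -> list[dict]:
    """Parse one daily settlement file (hypothetical CSV layout:
    trade_date, hub, settlement_price) into records ready to load
    into the trading/risk system."""
    records = []
    for row in csv.DictReader(io.StringIO(raw)):
        records.append({
            "source": source,  # e.g. "PLATTS" or "ICE"
            "trade_date": date.fromisoformat(row["trade_date"]),
            "hub": row["hub"],
            "price": float(row["settlement_price"]),  # $/MMBtu
        })
    return records

sample = "trade_date,hub,settlement_price\n2024-06-03,Henry Hub,2.85\n"
print(parse_price_file(sample, "PLATTS"))
```

The point of sketching it is that this step is almost entirely mechanical: fetch, parse, load. That is exactly the kind of work a scheduled job can own end to end.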
What's happening: The base commodity price (NYMEX) is only part of what a customer pays. On top of that, there are transportation costs, pipeline tariffs, local delivery charges, fuel costs, weather adjustments, and various fees. Different teams compile these "cost adders" for each market — Georgia gets its own model, Illinois gets its own, and the expanded markets (OH, PA, MI) get theirs.
People from Business Development gather tariff data from pipeline company websites (Sonat, Transco). Physical Operations provides basis differentials and fuel data. FP&A compiles load shape and weather data. All of these inputs funnel into a cost adder model for each market.
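Stacked together, the cost adders are simple arithmetic: the delivered rate is the base commodity price plus every market-specific charge on top of it. The numbers and category names below are illustrative, not real tariffs.

```python
def delivered_rate(nymex: float, adders: dict[str, float]) -> float:
    """All-in rate = base commodity price plus the market-specific
    cost adders (units assumed consistent, e.g. $/therm)."""
    return round(nymex + sum(adders.values()), 4)

georgia_adders = {           # illustrative values only
    "transport": 0.085,      # interstate pipeline tariffs (e.g. Sonat, Transco)
    "basis": 0.020,          # basis differential from Physical Operations
    "fuel": 0.010,           # fuel retention
    "local_delivery": 0.120, # local delivery charges
    "weather_adj": 0.005,    # load-shape / weather adjustment from FP&A
}
print(delivered_rate(0.650, georgia_adders))
```

The math is trivial; the pain is that each market maintains these adders in its own spreadsheet, gathered by hand from different teams and websites.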
What's happening: Once we know the monthly settlement prices (from Step A), the FP&A team calculates "weighted average strips." In plain English, this means: "Based on what gas costs over the next several months and how much gas each customer is expected to use, what's the blended average price we should plan around?"
A separate team called Quantitative Analytics provides usage-per-customer data (how much gas each customer type typically uses). This data changes only about twice a year. The FP&A team plugs it into their pricing models to weight the price curves appropriately.
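The weighted average strip is a usage-weighted mean: each month's forward price is weighted by how much gas customers are expected to use that month, so a cold January counts for more than a mild May. A minimal sketch, with hypothetical numbers:

```python
def weighted_avg_strip(prices: list[float], usage: list[float]) -> float:
    """Usage-weighted average of monthly forward prices:
    sum(price_m * usage_m) / sum(usage_m)."""
    if len(prices) != len(usage):
        raise ValueError("one usage weight per month is required")
    return sum(p * u for p, u in zip(prices, usage)) / sum(usage)

# Hypothetical 4-month strip ($/therm) and expected usage (therms)
strip = [0.62, 0.68, 0.75, 0.71]
monthly_usage = [40, 60, 90, 70]
print(round(weighted_avg_strip(strip, monthly_usage), 4))
```

Because the usage profile changes only about twice a year, the hard part is not the formula; it is shuttling the settlement prices and usage data between teams and files every month.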
What's happening: This is the most complex step. The Data Analytics team pulls together everything — the cost adders from Step B, the weighted average strips from Step C, the market data from Step A — and builds gross margin analysis files for each state. These files answer the question: "If we charge this price, what's our margin after all costs?"
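At its core, each gross margin file is answering one equation: margin equals the offered price minus commodity cost minus all adders. A simplified sketch (the real per-state models carry far more detail):

```python
def margin_analysis(offered: float, commodity: float, adders: float) -> dict:
    """Simplified gross margin check: what do we keep if we charge
    `offered` after commodity cost and all cost adders?"""
    cost = commodity + adders
    margin = offered - cost
    return {"cost": cost, "margin": margin, "margin_pct": margin / offered}

print(margin_analysis(offered=1.05, commodity=0.65, adders=0.24))
```

The complexity in this step is not the arithmetic; it is the assembly of inputs from Steps A, B, and C into a consistent view per state.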
From there, recommended prices are assembled into a Final Pricing Deck. That deck goes through multiple approval stages: a Directors meeting reviews it, a Hedge Committee approves it, and for some states, the approved prices get submitted to regulatory authorities. Prices also get published to market websites and portals where customers can see available rates.
This step also handles B2B pricing (rates for business customers) and manages the back-and-forth with Marketing on rate code validation.
What's happening: The Marketing team receives the gross margin analysis models from Data Analytics and uses them — along with competitor pricing and market intelligence — to decide the actual prices customers will see. They're not just rubber-stamping the numbers; they're making strategic pricing decisions.
For Georgia, Marketing compiles cost adders, NYMEX strips, and budgeted margins into one consolidated view, reviews competitors, and determines pricing. For Illinois and expanded markets, they review separate models that arrive on different dates throughout the month (PA on the 15th/16th, MI on the 20th/21st, etc.).
Marketing also generates promo codes — special pricing offers for customers — for both Georgia and Illinois.
What's happening: After everything is approved, someone has to actually put the prices into the systems that generate customer bills. This is trickier than it sounds because there are multiple billing systems:
Prime handles Illinois and Expanded Markets (OH, PA, MI). Safari handles B2B rates for Georgia and FNG. Banner handles Georgia GNG retail rates, and it's managed by a third-party vendor called Vertex.
For each system, the team first uploads prices to a test environment, validates that everything looks correct, generates approval reports, gets sign-off from Marketing and FP&A, and only then pushes to production (the live system). There are three formal SOX compliance controls here (RE81, RE82, RE83), meaning these steps are required under Sarbanes-Oxley financial-reporting controls.
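That test-then-production gate can be expressed as a mechanical check. A sketch of the idea, with hypothetical rate codes and a made-up function name; the real SOX controls involve reports and human sign-off, not just a boolean:

```python
def ready_for_production(test_rates: dict[str, float],
                         approved_rates: dict[str, float],
                         signoffs: set[str]) -> bool:
    """Gate mirroring the described workflow: rates loaded in the test
    environment must exactly match the approved deck, and both Marketing
    and FP&A must have signed off, before anything is pushed live."""
    if test_rates != approved_rates:
        return False
    return {"Marketing", "FP&A"}.issubset(signoffs)

approved = {"GA-RES-12": 0.89, "GA-RES-24": 0.93}  # hypothetical rate codes
print(ready_for_production(approved.copy(), approved, {"Marketing", "FP&A"}))
```

Today a person performs this comparison by eye across three billing systems. A system could perform it exactly, every time, and produce the approval report as a by-product.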
There's also a quarterly tax rate update process where Georgia tax rates are reviewed, verified by the Tax Team, approved by directors, and updated in Banner through Vertex.
With 30+ unprotected spreadsheets, a single accidental edit could send incorrect prices all the way to customer bills. The current process relies on people catching errors, not systems preventing them.
The end-to-end process takes most of the month. Prices arrive at 6pm, analysts manually copy-paste through multiple files, approvals happen via email, and uploads take days. There's very little room for error correction.
Three markets run nearly identical workflows with separate spreadsheets. The same logic is maintained three times. When a process improvement is made, it has to be replicated across all three — and often isn't.
The goal isn't to scrap everything overnight. It's to look at this process with fresh eyes and ask: where are we doing work that a system could do for us? Where are we passing data by email that could flow automatically? Where are we checking things manually that could be validated by a rule?
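To make "validated by a rule" concrete, here is one example of a check a system could enforce automatically instead of relying on a reviewer's eye. The rate codes, threshold, and rule set are illustrative, not current policy:

```python
def validate_rate_change(rate_code: str, old: float, new: float,
                         max_move: float = 0.20) -> list[str]:
    """Flag non-positive rates and month-over-month moves beyond a
    threshold (default 20%, illustrative). Returns a list of issues;
    an empty list means the change passes."""
    issues = []
    if new <= 0:
        issues.append(f"{rate_code}: rate must be positive, got {new}")
    elif old > 0 and abs(new - old) / old > max_move:
        issues.append(f"{rate_code}: {old} -> {new} moves more than "
                      f"{max_move:.0%} month over month")
    return issues

print(validate_rate_change("GA-RES-12", 0.89, 1.40))  # hypothetical rate code
```

A spreadsheet typo that turns $0.89 into $8.90 sails through a manual skim; a rule like this stops it before it reaches a billing system.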
The bot that already exists in Step D proves this can work. It compiles data, validates it, and emails the results — automatically. The question is: what would this process look like if that approach were applied more broadly?