
Tuesday, 2 July 2019

KScope19 – Deep-Dive into Data Integration



There were a lot of big announcements about the future direction Data Management is taking, and except for the proposed loss of our favourite FDMEE fish, they were all absolutely fantastic, and there is so much to be excited about. I’m going to give my views on the biggest announcements and the direction Data Integration is taking, and I’ll put some tutorials together once these features are available. But first:

Jargon Buster

  • FDMEE – on-premise Financial Data Management Extended Edition, amazing ETL tool for mappings, data import and export.
  • Data Management – the cloud version of FDMEE, built into PBCS and the other EPM Cloud services
  • Data Integration – the shiny, fishless facelift of Data Management

Data Integration

Be ready, the new world is coming, and eventually it will replace the Data Management we all know and love. Data Integration functions the same as Data Management in the background, but it’s been designed to be a lot more user friendly. For example, rather than creating a source, a target, then a location, an import format and a load rule, in the future you will just create an integration, which encompasses most of those steps in one place.

They’ve applied the same logic to the options, which are currently spread all over the place but will now be accessible in one window. There are also some great new features coming with the facelift, such as Target Expressions, which will allow you to apply several mappings at the load stage, including some mappings that were much more complicated to perform before. Target Expressions are complicated enough to deserve their own tutorial blog, so I’ll dive into those later. Another great feature is an official processing order for data maps: no more alphabetical processing, so you can have a Like mapping process first and then apply Explicit mappings afterwards if you want. This is a big quality-of-life (QOL) update for admins and really shows Oracle are listening to feedback when it comes to data integrations.

You can also choose to skip the workbench stage, which can massively speed up integrations for day-to-day running if drill-through isn’t required, although you can then switch it back on and re-run an integration to troubleshoot any weirdness. I can see customers using this feature a lot. This is coming in version 19.05 for cloud.

You can start using Data Integration right now in PBCS, although full parity with Data Management doesn’t exist yet. As a result, you can create an integration in Data Integration and then jump into the more familiar Data Management interface to see how it all works in the background, and you can run an integration in either interface with no issues.

Data Export Updates

A recent highlight update for Data Management was the massive upgrade to flat file exports, which has given us several great options. These include:

  • Import a file template to create a flat file export
  • Include/Exclude the file header
  • Sort the data in a file export
  • Attributes can be exported into flat files
  • Easy reordering of output columns
  • Pivot specific dimensions into the columns
  • Choosing whether or not to accumulate (aggregate) data
  • Changing the data file delimiter (we love a | in our csv files)

John Goodwin has blogged about it, so you know it’s great (check that out here), and there’s no point covering the same ground, so let’s move on!

Future Integrations to follow NetSuite to PBCS model

For those that haven’t read me gushing about the NetSuite integration, check my tutorial out here. Mike Casey has confirmed that future direct integrations are planned to follow the NetSuite model, which means they will support designing a query in the source system, pulling that specific query over to Data Management and mapping each query independently of the others. This is AMAZING for importing metadata versus importing data, which usually need completely different queries: all in the cloud, all on demand, all SIMPLY DELIGHTFUL!

On-Premises Agent

The new on-premises agent will allow direct connections to on-premises systems, with a variety of configurable options to ensure all system admins can get on board with it. It will offer a synchronous mode, which is tougher to set up and requires more IT involvement but is constantly listening for requests, and an asynchronous mode, which can be run on a schedule and will execute any queued requests when it runs on the server side. Asynchronous mode doesn’t need a network port or any more network infrastructure than EPM Automate needs now, so it will be less of a headache to set up.
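To picture the asynchronous mode, here is a minimal conceptual sketch in Python. It is purely illustrative and nothing like Oracle’s actual agent code or API: the function names (`fetch_queued`, `execute`, `upload`) are hypothetical stand-ins for “poll the cloud for queued requests, run them against the on-prem source, and push the results back”, with the agent always dialling out rather than listening.

```python
def run_agent_cycle(fetch_queued, execute, upload):
    """One scheduled polling cycle of a conceptual asynchronous agent.

    fetch_queued: outbound call to the cloud returning queued requests
    execute: runs a request against the on-premises source system
    upload: pushes the result back up to the cloud
    Returns the number of requests handled this cycle.
    """
    handled = 0
    for request in fetch_queued():  # agent initiates; no inbound port needed
        result = execute(request)
        upload(request, result)
        handled += 1
    return handled
```

Because the agent initiates every connection, it sits happily behind the corporate firewall; the synchronous mode flips this around, which is why it needs the extra network setup.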

Direct connections to on-prem databases from the cloud have been a massive request for a long time and it’s going to be a game-changer when this delightful tool comes out. It sounds seriously powerful!

Other Roadmap

Below is the roadmap which was shown at KScope19, under the usual Oracle Safe Harbour statement, so it represents their planned direction but not an official commitment to release any of it. Regardless, there’s plenty in here to keep guys like me excited and busy!


There was so much new and exciting content in the Data Management roadmap that I haven’t even got around to discussing the direct HCM Cloud integration, including write-back, or the fact that Data Integration has been upgraded to a 5 million row limit!

Expect plenty of blogs giving blow-by-blow walkthroughs of the above once they’re released, but until next time, adios!

Mike

Monday, 1 July 2019

KScope19 – Deep-Dive into the future of Planning

PBCS as we know it is changing, with some seriously exciting opportunities in the new world of Enterprise EPM Cloud, or the more familiar world of NetSuite PBCS. There are also new pre-built applications in the works for strategic workforce planning and sales planning, which aim to bring the great Hyperion Planning functionality to new areas of your company.

Below is a comparison from Oracle between the various flavours of traditional Planning, including the exciting Free-Form and the capability for up to 6 BSO cubes in a single application using the new Enterprise SKU.



Hybrid BSO

Hybrid BSO is available right now for EPBCS customers – by creating an SR you can request that this is turned on for you. This effectively gives you the power of BSO calculations at level-0 with the aggregations of ASO, leveraging the best of both worlds! Once you’ve turned it on, you apply it by creating sparse dynamic parents, and you will need to tune it to some extent (like how ASO uses aggregate views to pre-aggregate some data). PBCS can’t get this feature yet, sadly!

Free-Form Planning

Free Form Planning functions like Essbase, but in the Cloud and with all the extra bells and whistles of Planning. Currently each free-form application can only support one cube, but since you’re most likely to use it with the new Enterprise SKU, which gives you unlimited apps, you can really take advantage of this new tool.

Free Form means there is no requirement on dimensionality whatsoever, so you can create all sorts of wacky models for very specific parts of your business and then use data management tools to migrate the results into your forecast or budget.

Smart View for Office 365 Browser and Mac

Smart View can now be deployed centrally to your web browser, which is a massive improvement. It paves the way to a much better process for quicker plan updates – saving forms in Excel workbooks on SharePoint, so your users can click a link in an email to open their form in the browser, in Excel rather than the PBCS online wizard. There are great online resources to walk you through this process here, or you can wait for my blog post after I’ve implemented it internally for us! 😊

IFRS16 Full Support in Capex Module

This was big news for me, as I recently developed a custom IFRS16 application in PBCS. Pretty soon full IFRS16 functionality will be available by default in the capex module, although it remains to be seen how configurable it will be.

Groovy dude

Groovy is going to the big time as more and more customers move to Enterprise. I’m personally going to start migrating my EPM Automate scripts to use Groovy instead, and business rules can also include Groovy in Enterprise Cloud or EPBCS, which gives you a lot more power and control over your business rules. The best example, courtesy of Kyle Goodfriend of in2Hyperion (who has put an immense amount of work into creating publicly available Groovy tutorials), is to use Groovy to run aggregations only for the cells on a form that have actually changed, which massively reduces the calculation overhead. Groovy gives you the power to loop through those cells and create FIX statements on the fly, which is extremely powerful.
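To make the “FIX on the fly” idea concrete, here is a rough sketch of the core logic. In a real business rule you would write this in Groovy against the grid API, iterating over the edited cells; I’ve sketched it in Python purely for illustration, with `edited_cells` standing in as a hypothetical placeholder for whatever the grid iterator hands you.

```python
def build_fix(edited_cells):
    """Build an Essbase FIX statement covering only the members the
    user actually touched, instead of aggregating the whole cube.

    edited_cells: list of dicts mapping dimension name -> member name,
    one dict per changed cell on the form.
    """
    members = {}
    for cell in edited_cells:
        for dim, mbr in cell.items():
            members.setdefault(dim, set()).add(mbr)
    # one quoted member list per dimension, dimensions in a stable order
    member_lists = ['"' + '", "'.join(sorted(mbrs)) + '"'
                    for _, mbrs in sorted(members.items())]
    return "FIX (" + ", ".join(member_lists) + ")"
```

You would then aggregate only those members and their ancestors; on a big form where a user has touched a handful of cells, the saving over a full aggregation is enormous.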

There’s loads more that Groovy can do, and you can look forward to more Groovy functions becoming available as it’s on the strategic roadmap.

Cloud Features

A quick note – go to this link to access Oracle’s new tool for announcing and searching new features – it’s fantastic and has great links to the documentation, so that you can find out about key new features without having to wait for us to blog about it.

The future of Planning is bright, and I’m looking forward to it!

Until next time,

Mike



Wednesday, 14 June 2017

FDMEE Automation – Automatically Identifying the process ID



Welcome to the third part of my series on automatically loading data from source systems into PBCS. I promised tips and tricks, so here is another neat way of identifying what can often be a useful variable in your data load, the process ID.

Firstly, what is the process ID? It’s a unique identifier assigned to each FDMEE process that you execute. It’s easy to spot when you check your process details window, but not quite so easy to identify when you're running an automated script.



Why is this number important, I hear you cry? Well, for reasons best known to themselves, there are a few vital operations within FDMEE whose outputs contain the current process ID. A couple of these are crucial – exporting data from PBCS using FDMEE (check my colleague Jaz’s blog out for a full guide coming soon!) and also producing reports in FDMEE as I’ve blogged about here.

I’ve already covered the second one, so what about exporting data from PBCS using FDMEE? This is an awesome way to move data between cubes when you need to do complex mapping, since the inbuilt data maps functionality doesn’t quite cut the mustard for anything more complex than “map everything to No Entity”.

So, the output from exporting data from PBCS unfortunately comes out in this format: Target_Application_ProcessID.dat. An example would be Workflow_Cube_1393.dat using the process ID from above.

Now, if you wanted to automate pulling data from one cube to another, you would need to be able to identify that filename. Thankfully, the Target Application name is constant and so is the .dat extension, but you need to identify the process ID. But how can we do this? 

The answer is to reuse a cheap trick I’ve blogged about before – FDMEE reports are always at the bottom of a listfiles extract. So, if we were to run a dummy FDMEE report directly after our export from PBCS then it can logically be deduced that the process ID of our report will be one greater than the process ID of our export. And we can find out the report process ID using the instructions here.

So, the process would be:

  1. Run your FDMEE Export from PBCS rule (rundatarule)
  2. Immediately run a dummy FDMEE report – the report itself doesn’t matter, we just need the process ID (runDMReport)
  3. Using my previous blog, find the report’s process ID and take off 1 (set /a processID=%ID% - 1)
  4. Run your standard FDMEE load rule with the data file Target_Application_%processID%.dat as the input file (rundatarule again)

And boom! You’ve just loaded data from one cube to another, in a ridiculously roundabout way. But if you have the requirement to map several dimensions into one for reporting, this is the most elegant way I can piece together to achieve it.
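The deduction in steps 2–3 is easy to script. My batch file does it with listfiles and `set /a`; here is the same logic sketched in Python for illustration, assuming (as in my earlier post) that the dummy report appears as the last line of the listfiles output and that its file name ends in the process ID:

```python
import re

def deduce_export_file(listfiles_output, target_app):
    """Work out the PBCS export file name from a listfiles extract.

    The dummy FDMEE report sorts to the bottom of listfiles and its
    file name carries its process ID; since the report ran immediately
    after the export, the export's process ID is the report's minus one.
    """
    last_line = listfiles_output.strip().splitlines()[-1]
    report_id = int(re.search(r"(\d+)\.\w+$", last_line).group(1))
    return f"{target_app}_{report_id - 1}.dat"
```

The sample file names here are made up, but the arithmetic is exactly the report-ID-minus-one trick from the steps above.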

Here’s the setup in action – notice particularly the name of the file being used for process 1326 – you don’t even have to download the data file, just keep it saved in the cloud and point your rule at outbox/TestApp_%processID% in your batch file to achieve the below.



Again, please check this fantastic blog from my colleague to see the missing piece of the puzzle, the export from PBCS via FDMEE (step 1324 in the screenshot above).

Stay tuned to my blog for the final instalment on FDMEE next week, setting up drill-through to Microsoft Dynamics CRM!

Cheers,

Mike