How do I use a Single Mapping to Handle Multiple Files with Different but Similar Formats in Informatica PowerCenter?

0 votes
1,197 views

I have some files that I would like to consolidate into a single database table. The files have similar but different formats. The files look something like this:

FileOne:
•ColA : string
•ColB : string
•ColC : string

FileTwo:
•ColAA : string
•ColBB : string
•ColCC : string

FileThree:
•Col01 : string
•Col02 : string
•Col03 : string

The destination table looks like this:

TableDestination:
•ColFirst : string
•ColSecond : string
•ColThird : string

I want to develop a mapping that ETLs these three files into this one database, but because the column names are different, it looks like I'll have to develop three different mappings, or three different sources, or three different somethings. The problem is that my example is contrived: I actually have many different files that all have different formats and column names, but the data is all very similar.

I would like to develop a single mapping or workflow that can handle all of this by only adding a table that holds the column mappings. Such a table would look like this based on the sample files and sample table above:

TableMappings:
SourceFileName   SourceColumnName   DestinationColumnName
FileOne          ColA               ColFirst
FileOne          ColB               ColSecond
FileOne          ColC               ColThird
FileTwo          ColAA              ColFirst
FileTwo          ColBB              ColSecond
FileTwo          ColCC              ColThird
FileThree        Col01              ColFirst
FileThree        Col02              ColSecond
FileThree        Col03              ColThird

In this way, to edit a column mapping I only have to make an edit to this TableMappings table. I wouldn't have to make any changes at all to the mapping or workflow, nor would I have to redeploy an application.

What would a mapping or workflow look like that could take advantage of something like this? I assume there'd be a flat file source that takes files from a folder. There would be something in the middle that uses this TableMappings table to map column names. Finally there would be a relational data object that represents my destination database table "TableDestination". I don't know how to put this together though.

posted Jan 30, 2015 by Amit Sharma


1 Answer

0 votes

With flat files as a source, column names are not important. It doesn't even matter whether the column count matches. If the actual file has more columns than the Source Definition, only the first n columns will be read (with n being the number of ports in the Source Definition). In the opposite situation, the extra ports will contain null values.
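
To make that positional behaviour concrete, here is a minimal plain-Java sketch (not PowerCenter-generated Java Transformation code); the port count of 3, the comma delimiter and the sample rows are assumptions purely for illustration:

import java.util.Arrays;

// Illustration of positional flat-file reading: each row is split into fields
// and assigned to a fixed number of ports by position only. Fields beyond the
// port count are ignored; missing fields come through as null.
public class PositionalReadSketch {
    static final int PORT_COUNT = 3; // number of ports in the Source Definition (assumed)

    static String[] readRow(String line) {
        String[] fields = line.split(",", -1);     // raw fields from the file
        String[] ports = new String[PORT_COUNT];   // what the mapping actually sees
        for (int i = 0; i < PORT_COUNT; i++) {
            ports[i] = (i < fields.length) ? fields[i] : null;
        }
        return ports;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(readRow("a,b,c,d"))); // extra field "d" is dropped
        System.out.println(Arrays.toString(readRow("a,b")));     // third port stays null
    }
}

The column names in the file itself play no role here; only the position of each field matters.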

Having said that, loading multiple flat files is easy.

The problem arises if the column order differs and you want this additional static table that defines the column mapping. This is doable, e.g. a Java Transformation can be used to do the column mapping, but the whole solution is too complex for me to describe here. I can try to answer some precise, specific questions, but I'm not able to prepare and paste the complete solution here.
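
As a rough idea only, here is a hedged plain-Java sketch of the kind of remapping logic such a Java Transformation could implement. The MAPPINGS content, the remap() helper and the representation of rows as maps are assumptions for illustration, not actual PowerCenter API calls; in a real mapping the TableMappings rows would be loaded via a lookup or an initialization query rather than hard-coded:

import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: each (sourceFile, sourceColumn) pair is looked up in the
// mapping table to find the destination column of TableDestination.
public class ColumnRemapper {

    // Stand-in for the TableMappings table described in the question.
    static final Map<String, String> MAPPINGS = new HashMap<>();
    static {
        MAPPINGS.put("FileOne.ColA",    "ColFirst");
        MAPPINGS.put("FileOne.ColB",    "ColSecond");
        MAPPINGS.put("FileOne.ColC",    "ColThird");
        MAPPINGS.put("FileTwo.ColAA",   "ColFirst");
        MAPPINGS.put("FileTwo.ColBB",   "ColSecond");
        MAPPINGS.put("FileTwo.ColCC",   "ColThird");
        MAPPINGS.put("FileThree.Col01", "ColFirst");
        MAPPINGS.put("FileThree.Col02", "ColSecond");
        MAPPINGS.put("FileThree.Col03", "ColThird");
    }

    // Turn one source row (source column name -> value) into a destination row
    // (ColFirst/ColSecond/ColThird -> value) using the mapping table.
    static Map<String, String> remap(String sourceFile, Map<String, String> row) {
        Map<String, String> out = new LinkedHashMap<>();
        for (Map.Entry<String, String> col : row.entrySet()) {
            String dest = MAPPINGS.get(sourceFile + "." + col.getKey());
            if (dest != null) {
                out.put(dest, col.getValue());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> row = new LinkedHashMap<>();
        row.put("ColAA", "x");
        row.put("ColBB", "y");
        row.put("ColCC", "z");
        // Prints {ColFirst=x, ColSecond=y, ColThird=z}
        System.out.println(remap("FileTwo", row));
    }
}

In PowerCenter the source file name can be exposed through the "Currently Processed File Name" port on the flat file source definition, which would give the transformation the key it needs to pick the right set of mappings for each row.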

answer Feb 4, 2015 by Shweta Singh
Similar Questions
0 votes

I have a directory of files where each file is in one of three file formats. We can use the filename to determine the file format for mapping data. I would like to loop through all the files in the directory and based on the filename direct it towards the associated session mapping, delete the file on success, update the control table and then move to the next file. I'd like the execution to be deterministic. What are some of my options?

0 votes

Can you use a target table in multiple areas of a single mapping?

I have routers around the table, where I know some rows are ready, and currently they all go to a final Union transformation.

I am wondering if it is possible to direct these rows from the routers directly into multiple instances of a single target table.

If so, how does Informatica do the inserts?

Will it queue the outputs into the same target each time, or try to insert them concurrently as it encounters them?

0 votes

Suppose we have duplicate records in a table temp_n. I want to pass unique values to t1 and duplicate values to t2 in a single mapping. How can we achieve this?

+2 votes

In my mapping it takes a long time to load rows into the target table, even though it reads those rows from the .CSV file within a minute. In this mapping one source is a flat file and the other source is a table holding data from the target. The logic UPDATEs, DELETEs or INSERTs into the target based on a comparison of the data from the flat file with the source table (the data from the target table). Looking at the session log, it reads the data from the flat file within a minute, but it writes to the target at only 9 rows/sec throughput.

The target table is created using a unix script, i.e. CREATE TABLE STG_LM_INSTITUTION as (SELECT * FROM LM_INSTITUTION); the target in this mapping is STG_LM_INSTITUTION, which is a copy of the final target (LM_INSTITUTION). I think the problem is due to the creation of the target table in the script, but I'm not sure. Please help me solve this issue. The source flat file has 2 lakh (200,000) rows. When I run the mapping with all 2 lakh rows, it loads only 1 lakh records into the target after 11 hours, yet when running with only 500 rows it loads them into the target in one minute.

+3 votes

How do I access a mapping parameter ($$myvariable) from a Java Transformation in Informatica PowerCenter?

What I want to do is to make a Java Transformation reusable by making a part of it configurable, and a mapping parameter seemed suitable for that; however, I haven't been able to access (read) the parameter from the Java code.

...