SQL Query in Informatica Cloud

0 votes
878 views

I am trying to integrate a MySQL database with Salesforce, but I am having trouble with part of the query logic.

I enter this query:

SELECT occupation_name FROM skillsmatch_skillsprofile GROUP BY time_updated ORDER BY time_updated DESC LIMIT 0,5

But get this error:

<> missing operator... SELECT>>>> <<<

Any idea where I might be going wrong?

Thanks for your help!

posted Dec 30, 2014 by Amit Sharma


2 Answers

0 votes

There should be an aggregate column (such as COUNT(), MIN(), SUM(), etc.) in the SELECT list.
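
For example, one way to rework the posted query along those lines (a sketch only, keeping the table and column names from the question) is to aggregate the non-grouped column:

-- MAX() is used here only so that every selected column is either grouped or aggregated
SELECT MAX(occupation_name) AS occupation_name, time_updated
FROM skillsmatch_skillsprofile
GROUP BY time_updated
ORDER BY time_updated DESC
LIMIT 0,5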

answer Jan 2, 2015 by Shweta Singh
0 votes

Check the column name "occupation_name": it should be included in the GROUP BY clause as well.
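
For example (again just a sketch against the original table), grouping by both columns:

SELECT occupation_name
FROM skillsmatch_skillsprofile
GROUP BY occupation_name, time_updated
ORDER BY time_updated DESC
LIMIT 0,5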

answer Feb 5, 2018 by anonymous
Similar Questions
0 votes

I have a SQL transformation with 2 ports. I want to insert the values of these ports into a table, but I am getting an error from the SQLError port.

Below is the query I am writing in the source qualifier:

INSERT INTO A values (~QC_CODE~,(~QUERY_STRING~));

The QUERY_STRING port contains a SQL statement which is executed on Teradata, and the results have to be inserted into table A.

If I replace the first port in the above query with a constant value, I get correct results. Below is the query that gives the correct result:

INSERT INTO A values ('1',(~QUERY_STRING~));
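
One thing worth checking (an assumption on my part, since it depends on QC_CODE's data type) is whether the substituted string port needs to be enclosed in quotes the same way the working constant '1' is, for example:

-- hypothetical variant: quote the port so the substituted value becomes a character literal
INSERT INTO A values ('~QC_CODE~',(~QUERY_STRING~));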
+1 vote

We are looking for a good cloud integration tool. We have Informatica Cloud and SnapLogic as options, but we are still looking for more information on which key features differentiate these two tools. People seem to be choosing SnapLogic over Informatica these days. What are the main features that SnapLogic provides but Informatica does not, or vice versa?

+3 votes

I wasn't sure how to word this question, so I'll try to explain. I have a third-party database on SQL Server 2005. I have another SQL Server 2008 instance, to which I want to "publish" some of the data from the third-party database. That database will then be used as the back-end for a portal and reporting services; it will be the data warehouse.

On the destination server I want to store the data in different table structures from those in the third-party db. Some tables I want to denormalize, and there are lots of columns that aren't necessary. I'll also need to add additional fields to some of the tables, which I'll need to update based on data stored in the same rows. For example, there are varchar fields that contain info I'll want to populate other columns with. All of this should cleanse the data and make it easier to report on.

I can write the query(s) to get all the info I want into a particular destination table. However, I want to be able to keep it up to date with the source on the other server. It doesn't have to be updated immediately (although that would be good), but I'd like it to be updated perhaps every 10 minutes. There are hundreds of thousands of rows of data, but the changes to the data and addition of new rows etc. aren't huge.

I've had a look around, but I'm still not sure of the best way to achieve this. As far as I can tell, replication won't do what I need. I could manually write the T-SQL to do the updates, perhaps using the MERGE statement, and then schedule it as a job with SQL Server Agent. I've also been having a look at SSIS, and that looks to be geared at this kind of ETL task.

I'm just not sure what to use to achieve this, and I was hoping to get some advice on how one should go about doing this kind of thing. Any suggestions would be greatly appreciated.
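
For reference, a minimal sketch of the MERGE-plus-Agent-job approach mentioned above; the linked server, table, and column names are hypothetical and would need to match your own schema:

-- Runs on the destination (2008) server; SourceServer is a hypothetical linked server.
-- Schedule this as a SQL Server Agent job, e.g. every 10 minutes.
MERGE INTO dbo.DimCustomer AS tgt
USING (
    SELECT CustomerID, CustomerName, City
    FROM SourceServer.ThirdPartyDb.dbo.Customer
) AS src
ON tgt.CustomerID = src.CustomerID
WHEN MATCHED AND (tgt.CustomerName <> src.CustomerName OR tgt.City <> src.City) THEN
    UPDATE SET tgt.CustomerName = src.CustomerName,
               tgt.City = src.City
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerID, CustomerName, City)
    VALUES (src.CustomerID, src.CustomerName, src.City);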

0 votes

What will happen if the SELECT list columns in the custom override SQL query and the OUTPUT ports order in the SQ transformation do not match?

...