
BQL Feature Enhancement Request #18

Closed
tmlynowski opened this issue Nov 1, 2022 · 10 comments
Labels
enhancement New feature or request

Comments

@tmlynowski

Feature enhancement request: Is it possible to implement BQL functionality the same as the Excel BQL Builder? Thanks

@matthewgilbert added the enhancement label Nov 5, 2022
@matthewgilbert
Owner

matthewgilbert commented Nov 5, 2022

@tmlynowski could you spec out what this involves? Is this a feature of the underlying blpapi library? The Excel Bloomberg functionality and their python API are not identical.

@avantgardeam
Contributor

@matthewgilbert BQL is indeed a functionality that can only be accessed via Excel (or BQNT). However, there seems to be a workaround that involves accessing Excel via Python to make BQL requests (https://stackoverflow.com/a/69014472). Do you think it would be possible to create a new class to implement this workaround?

@matthewgilbert
Owner

blp is a wrapper for accessing blpapi via a more pythonic interface. What you linked has nothing to do with blpapi so I think this would be out of scope for this project.

@DS-London

DS-London commented Jun 27, 2023

@matthewgilbert Rather like the //blp/exrsvc service for retrieving saved SRCH results (which is used in pdblp), there is an undocumented API service for BQL queries, //blp/bqlsvc. The sendQuery request takes a BQL string (the expression), and returns a single element response containing a JSON string, which can be unpacked into a DataFrame. It is not well advertised.
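
For anyone wanting to experiment, a minimal sketch of hitting that service directly with raw blpapi might look like the following (untested here; it assumes a local Desktop API session on the default localhost:8194 and just prints the response, whose payload is the JSON string described above):

    import blpapi

    # connect to the local Desktop API endpoint (assumed default host/port)
    options = blpapi.SessionOptions()
    options.setServerHost("localhost")
    options.setServerPort(8194)

    session = blpapi.Session(options)
    session.start()
    session.openService("//blp/bqlsvc")
    service = session.getService("//blp/bqlsvc")

    # build the sendQuery request with the BQL expression
    request = service.createRequest("sendQuery")
    request.set("expression", "get( px_last() ) for( 'AAPL US Equity' )")
    session.sendRequest(request)

    # drain events until the final RESPONSE arrives; print each message to inspect the JSON payload
    done = False
    while not done:
        event = session.nextEvent(500)
        for msg in event:
            print(msg)
        if event.eventType() == blpapi.Event.RESPONSE:
            done = True
    session.stop()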

@avantgardeam
Contributor

> @matthewgilbert Rather like the //blp/exrsvc service for retrieving saved SRCH results (which is used in pdblp), there is an undocumented API service for BQL queries, //blp/bqlsvc. The sendQuery request takes a BQL string (the expression), and returns a single element response containing a JSON string, which can be unpacked into a DataFrame. It is not well advertised.

Is this feature available for the Desktop API or is it exclusive to BQNT?

@DS-London

@avantgardeam It is in the API, accessible via blpapi.

@matthewgilbert
Owner

Thanks for flagging @DS-London. In that case I’m happy to accept a PR for this feature.

@avantgardeam
Contributor

avantgardeam commented Jun 28, 2023

@DS-London, thank you for sharing this option!

@matthewgilbert, I have successfully made requests to //blp/bqlsvc. However, I've noticed that the behavior of this endpoint differs from the others. Take a look at the sample request and the beginning of its response:

query = {'sendQuery': {'expression': "get( px_last() ) for( 'AAPL BZ Equity' )"}}
response = bquery.query(query, parse=False, collector=list)

Response snippet:

[{'eventType': 5,
  'eventTypeName': 'blpapi.Event.RESPONSE',
  'messageNumber': 0,
  'message': {'fragmentType': 0,
   'correlationIds': [66],
   'messageType': 'result',
   'timeReceived': Timestamp('2023-06-28 04:45:19.509121+0000', tz='UTC'),
   'element': '{"results":{"px_last()":{"name":"px_last()","offsets":[0], ...'}]

The issue arises because BlpParser relies on list(response["message"]["element"].keys())[0] to represent the response type. However, in this case, response[0]["message"]["element"] is a string that lacks any information about the response type.

In this case, how should I proceed? Should I set parse to False in self.query() and utilize the collector for both parsing and collecting?
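
For illustration, one shape the collector route could take is a small JSON-decoding collector (a sketch, untested; it assumes collector can be any callable applied to the iterator of unparsed response dicts, the same way list is used above, and bql_collector is just a made-up name):

import json

def bql_collector(responses):
    # json-decode each message's payload and keep only the "results" section
    return [json.loads(r["message"]["element"])["results"] for r in responses]

# hypothetical usage, re-using the query dict from above:
# parsed = bquery.query(query, parse=False, collector=bql_collector)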

@DS-London

@avantgardeam The response is a JSON string. You can parse the whole string by using json.loads(response[0]["message"]["element"]) and then pick out the results key. Something like this (untested with blp):

    from json import loads

    from pandas import DataFrame, concat

    # ... response obtained as above, e.g.
    # response = bquery.query(query, parse=False, collector=list)

    jsonResults = loads(response[0]["message"]["element"])
    rslts = jsonResults['results']

    # one column per requested field, values indexed by the id column (e.g. the security)
    df = concat(
        [
            DataFrame(c['valuesColumn']['values'],
                      index=c['idColumn']['values'],
                      columns=[c['name']])
            for c in rslts.values()
        ],
        axis='columns',
    )

is one way of converting the JSON to a DataFrame, though only tested on a subset of BQL queries.

@avantgardeam
Contributor

Thanks for sharing, @DS-London!

Sometimes, the secondaryColumns are important too, especially when it comes to historical data. Since we don't have much info on this endpoint, I've gathered everything and melted the dataframes to make it easier to concatenate later. Any feedback would be great!

import json

import pandas

from blp.blp import BlpQuery, create_query

bquery = BlpQuery().start()

query = create_query(
    "sendQuery", 
    {"expression": "get( PX_LAST(start = -3D), CUR_MKT_CAP, PX_TO_BOOK_RATIO ) for( ['IBM US Equity', 'AAPL US Equity', 'AZUL4 BZ Equity'] )"}
)

res = bquery.query(query, parse=False, collector=list)

result = json.loads(res[0]['message']['element'])['results']

data = []
for field in result.values():
    # ID column may be a security ticker
    aux_dict = {    
        "field": field['name'],
        "id": field['idColumn']['values'],
        "value": field['valuesColumn']['values']
    }

    # Secondary columns may be DATE or CURRENCY, for example
    for secondary_column in field['secondaryColumns']:
        aux_dict[secondary_column['name']] = secondary_column['values']

    df = pandas.DataFrame(aux_dict)

    # Since we have multiple secondary columns, we need to melt the dataframe
    id_vars = ['field', 'id', 'value']
    df = df.melt(
        id_vars=id_vars, value_vars=df.columns.difference(id_vars), 
        var_name="secondary_name", value_name="secondary_value"
    ).dropna(subset=["value"])
    
    column_order = ['secondary_name', 'secondary_value', 'field', 'id', 'value']
    df = df[column_order]

    data.append(df)

final_df = pandas.concat(data)
