feat: priority recompute restructure #279
base: main
Conversation
ALTER TABLE public.dimensions
add column position integer DEFAULT 1 NOT NULL;

ALTER TABLE public.contexts
Can we write a query to copy priority to weightage?
@Datron we can write one, but we will anyway have to re-populate the dimension position and then recompute
migration approach: #277 (comment)
we can use the LOG function to backfill:
UPDATE public.contexts SET position = FLOOR(LOG(2, priority)) WHERE priority > 0;
crates/context_aware_config/migrations/2024-11-05-083200_add_priority_restructure/up.sql
@@ -11,6 +11,7 @@ actix-http = "3.3.1"
actix-web = { workspace = true }
anyhow = { workspace = true }
base64 = { workspace = true }
bigdecimal = { version = "0.3.1", features = ["serde"] }
Do we need bigdecimal?
Yes @Datron, we need it to calculate higher values
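A small illustration of the point (hypothetical values; assuming BigUint comes from the num-bigint crate used in this PR): a plain i64 overflows once the exponent passes 62, which is why an arbitrary-precision type is needed for the weightage.

// illustrative only: 2^63 no longer fits in i64, but BigUint/BigDecimal can go far beyond
let _overflows: Option<i64> = 2i64.checked_pow(63); // None: i64 tops out at 2^63 - 1
let base = num_bigint::BigUint::from(2u32);
let _big = base.pow(100u32); // arbitrary precision, no overflow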
@@ -38,6 +42,12 @@ use crate::db::{
},
};

pub struct DimensionData {
derive Debug and Clone
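A minimal sketch of what that would look like (illustrative only, assuming the field types, including JSONSchema, implement or can be made to implement these traits):

#[derive(Debug, Clone)]
pub struct DimensionData {
    pub schema: JSONSchema,
    pub priority: i32,
}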
@@ -38,6 +42,12 @@ use crate::db::{
},
};

pub struct DimensionData {
pub schema: JSONSchema,
pub priority: i32,
pub priority: i32,
pub weight: i32,
@Datron it will be priority only.
I refactored a bit here, changing from tuples to a struct.
@pratikmishra356 will this go away once we migrate all data and the frontend?
.get(dimension.as_str())
.map(|x| x.position)
.ok_or_else(|| {
let msg = String::from("Dimension not found in Dimension schema map");
Can we also log the name of the dimension we couldn't parse?
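One hedged way to do that, keeping the shape of the snippet above (`dimension` is assumed to be in scope exactly as shown):

.get(dimension.as_str())
.map(|x| x.position)
.ok_or_else(|| {
    // include the offending key so the failing dimension shows up in logs
    let msg = format!("Dimension `{}` not found in Dimension schema map", dimension);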
add column position integer DEFAULT 1 NOT NULL;

ALTER TABLE public.contexts
add column weightage numeric(1000,0) DEFAULT 1 NOT NULL;
Should we call this weight?
we can, let's check with the team also
@@ -527,7 +529,8 @@ async fn reduce_config(
.and_then(|value| value.to_str().ok().and_then(|s| s.parse::<bool>().ok()))
.unwrap_or(false);

let dimensions_schema_map = get_all_dimension_schema_map(&mut conn)?;
let dimensions_vec = get_dimension_data(&mut conn)?;
Should get_dimension_data return a map?
No, it does not need to; I have separated it into 2 functions, and get_dimension_data_map returns only the values which are needed
but do we need the 2 functions separately?
pub old_weightage: BigDecimal,
pub new_weightage: BigDecimal,
pub old_weightage: BigDecimal,
pub new_weightage: BigDecimal,
pub old_weight: BigDecimal,
pub new_weight: BigDecimal,
impl Position {
fn validate_data(position_val: Option<i32>) -> Result<Self, String> {
if let Some(val) = position_val {
if val < 0 {
We should allow 0 right? 2^0 is 1
yes we are allowing 0
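A sketch of the intended rule (error message and structure are illustrative; Position wraps Option<i32> as shown later in this thread): 0 passes, only negatives are rejected.

impl Position {
    fn validate_data(position_val: Option<i32>) -> Result<Self, String> {
        match position_val {
            // 2^0 = 1, so 0 is a perfectly valid position; only negatives fail
            Some(val) if val < 0 => Err(String::from("position must be >= 0")),
            _ => Ok(Self(position_val)),
        }
    }
}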
conn.transaction::<_, superposition::AppError, _>(|transaction_conn| {
match (prev_index, new_index.clone(), req_position_val.clone()) {
(Some(prev_val), Some(new_val), None) => {
println!("1");
Remove this
@@ -38,6 +42,12 @@ use crate::db::{
},
};

pub struct DimensionData {
pub schema: JSONSchema,
pub priority: i32,
@pratikmishra356 will this go away once we migrate all data and the frontend?
.order(priority.asc())
.select((dimension, priority, position))
.load::<(String, i32, i32)>(&mut conn)
.expect("Error loading dimensions");
No expects; we should throw a DB error
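A hedged sketch of propagating the failure instead of panicking, following the map_err + log pattern already used elsewhere in this PR (the exact error macro is whatever the crate provides):

.load::<(String, i32, i32)>(&mut conn)
.map_err(|err| {
    // surface the DB failure to the caller instead of panicking
    log::error!("failed to load dimensions with error: {}", err);
    unexpected_error!("Could not load dimensions due to a database error")
})?;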
yes, the priority field will go away.
(Some(prev_val), Some(new_val), None) => {
println!("1");
if prev_val < new_val {
new_dimension.position = new_val.clone() as i32 - 1;
Why are we doing this check? @pratikmishra356 can we add some comments here and use better variable names to document what's happening?
Sorry, missed this; will have to write comments in the code to explain this
@@ -35,6 +36,36 @@ impl TryFrom<i32> for Priority {
}
}

#[derive(Debug, Deserialize, AsRef, Deref, DerefMut, Into)]
#[serde(try_from = "Option<i32>")]
pub struct Position(Option<i32>);
Any reason for wrapping Option instead of i32 directly? You can do Option<Position> then, rather than deal with None and Some everywhere
For now, from the API request it should be optional till we move completely away from priority.
I can do either Option<Position(i32)> or Position(Option<i32>);
either way I will have to handle Some and None.
The Option will not be there post migration.
In either case, it should be Option<Position>, where it is Position(i64), because the parameter is optional; but currently it reads as if the parameter is mandatory and in turn takes an optional value
and when you want to make it mandatory, you would change only the request type and not the properties of the type
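A hedged sketch of that shape, reusing names from the PR (the derives, error type, and messages are assumptions): optionality lives on the request field, while Position itself always carries a value.

use serde::Deserialize;

#[derive(Debug, Deserialize)]
#[serde(try_from = "i32")]
pub struct Position(i32);

impl TryFrom<i32> for Position {
    type Error = String;
    fn try_from(val: i32) -> Result<Self, Self::Error> {
        // same validation rule as discussed above: 0 is allowed, negatives are not
        if val < 0 {
            Err(String::from("position must be >= 0"))
        } else {
            Ok(Self(val))
        }
    }
}

#[derive(Debug, Deserialize)]
pub struct CreateReq {
    pub dimension: DimensionName,
    pub priority: Priority,
    // optional today; once priority is fully migrated out, only this field
    // changes to `Position`, the type itself stays untouched
    pub position: Option<Position>,
}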
@@ -43,6 +43,7 @@ diesel = { version = "2.1.0", features = [
"chrono",
"uuid",
"postgres_backend",
"numeric",
should we move this feature to context-aware-config only?
ALTER TABLE public.dimensions
add column position integer DEFAULT 1 NOT NULL;

ALTER TABLE public.contexts
we can use the LOG function to backfill:
UPDATE public.contexts SET position = FLOOR(LOG(2, priority)) WHERE priority > 0;
@@ -527,7 +529,8 @@ async fn reduce_config(
.and_then(|value| value.to_str().ok().and_then(|s| s.parse::<bool>().ok()))
.unwrap_or(false);

let dimensions_schema_map = get_all_dimension_schema_map(&mut conn)?;
let dimensions_vec = get_dimension_data(&mut conn)?;
but do we need the 2 functions separately?
@@ -10,6 +10,7 @@ use crate::db::models::Dimension;
pub struct CreateReq {
pub dimension: DimensionName,
pub priority: Priority,
pub position: Position,
shouldn't the type here be Option<Position>?
get_dimension_data_map is a util function on dimension data, so it's better to have the fetch-dimensions function separately
we can use the LOG function to backfill:
UPDATE public.contexts SET position = FLOOR(LOG(2, priority)) WHERE priority > 0;
during deployment the priority value might not be a power of 2
not needed, I have added a new API for this
let base = BigUint::from(2u32);
let result = base.pow(index);
let biguint_str = &result.to_str_radix(10);
BigDecimal::from_str_radix(&biguint_str, 10).map_err(|err| {
is this correct? Seems like we are sending an address of an address over here: biguint_str is already storing a reference, and then we are passing &biguint_str
pending review
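For reference, a sketch of the version without the extra borrow (the original does compile, since &&String still coerces to &str, but binding the String directly reads cleaner):

let base = BigUint::from(2u32);
let result = base.pow(index);
let biguint_str = result.to_str_radix(10);   // owned String, no extra borrow
BigDecimal::from_str_radix(&biguint_str, 10) // single &str borrow here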
.map_err(|err| {
log::error!("failed to fetch dimensions with error: {}", err);
unexpected_error!("Something went wrong")
})?;
Should we throw a DB error? Also:
.map_err(|err| {
log::error!("failed to fetch dimensions with error: {}", err);
unexpected_error!("Something went wrong")
})?;
.map_err(|err| {
log::error!("failed to fetch dimensions with error: {}", err);
unexpected_error!("Could not process this request due to a database error. Please reach out to your admin")
})?;
Small nitpick, can we resolve this quickly?
Problem
The user has to declare this exponential value; because the value is exponential, it limits the total number of dimensions that can be defined
Solution
PostgreSQL Numeric type approach
For dimensions, we can have another column named order which will store the order of the dimension's specificity; a higher order means more specific.
For contexts, we will have another column named weightage, where we will calculate the value just like we do now: the summation of 2^(dimension's order value). For example (illustrative), a context that uses dimensions with order values 0 and 3 would get a weightage of 2^0 + 2^3 = 9.
Since a context's weightage value can be too large, we will use PostgreSQL's Numeric type:
CREATE TABLE large_numbers (
id SERIAL PRIMARY KEY,
big_value NUMERIC
);
Rust/Diesel supports the BigDecimal type, which is compatible with the Numeric type.
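A hedged sketch of how that mapping could look with Diesel (table and column names follow the SQL example above, not the actual schema in this PR; Diesel's "numeric" feature and the bigdecimal crate from this PR are assumed):

use bigdecimal::BigDecimal;
use diesel::prelude::*;

diesel::table! {
    // hypothetical table mirroring the SQL example above
    large_numbers (id) {
        id -> Int4,
        big_value -> Numeric,
    }
}

// with Diesel's "numeric" feature enabled, the SQL Numeric column
// deserializes into bigdecimal::BigDecimal
#[derive(Queryable, Debug)]
pub struct LargeNumber {
    pub id: i32,
    pub big_value: BigDecimal,
}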
Environment variable changes
What ENVs need to be added or changed
Pre-deployment activity
Things needed to be done before deploying this change (if any)
Post-deployment activity
link for migration/backward compatibility: #277 (comment)
API changes
Possible Issues in the future
Describe any possible issues that could occur because of this change