Hi,
I have a large piece of code, and I cannot understand some parts of it.
Below is an excerpt:
void determind_corr_value(void)
{
    int8_t Leading0;
    uint32_t CorrValue;
    ...
    if (CorrValue > 0)
    {
        // Detect bit number of MSB of CorrValue (leading 0's are counted)
        Leading0 = 1;
        while (!(*((int16_t*) &CorrValue + 1) < 0))  // MS_Bit = 0
        {
            CorrValue <<= 1;
            Leading0++;
        }
        Leading0 -= 8;
        // Increase the resolution for compensation steps to 0.5 bit
        // by checking the 2nd bit
        //lint -save -e701
        Leading0 <<= 1;
        if (*((int16_t*) &CorrValue + 1) & 0x4000)
        {
            Leading0--;
        }
        // Calculate now the correction value
        if (Current < CalibrationData.I_Limit_Corr) {
            // Low range
            Leading0 -= (int8_t)Data.Leading0_Corr_Low;
            CorrValue = (unsigned)(long)(32768L +
                ((int16_t)(CalibrationData.Correction_Low * Leading0)));
        } else {
            // High range
            Leading0 -= (int8_t)Data.Leading0_Corr_High;
            CorrValue = (unsigned)(long)(32768L +
                ((int16_t)(CalibrationData.Correction_High * Leading0)));
        }
    } else {
        CorrValue = 32768;
    }
    ...
}
I guess it is doing some kind of DSP, but I am not sure what exactly is
going on. I hope someone can take a guess and give me some hints on my
questions so far:
1. Why is Leading0 initialized to 1 at the beginning of the process?
2. Why is 8 subtracted from Leading0?
3. What do "0.5 bit" and "checking the 2nd bit" mean?
4. Why 32768?
I hope the code implements some algorithm that is familiar to some of you.
Regards,
woody