
Go Back   Official Fulqrum Publishing forum > Fulqrum Publishing > IL-2 Sturmovik: Cliffs of Dover > Technical threads > FM/DM threads

FM/DM threads Everything about FM/DM in CoD

#111
06-23-2011, 08:48 PM
Viper2000
Approved Member
Join Date: Mar 2011
Posts: 218

Quote:
Originally Posted by Crumpp
It was....

Both Daimler-Benz and BMW were forbidden from even being in the aviation market.
AFAIK they had to quit for 3 years. Thereafter they didn't get back into aerospace because they didn't have a market rather than anything else.

However, Daimler has quite a big stake in EADS, whilst BMW started a joint venture with RR to make turbofans in Germany from 1990, though now this is 100% owned by RR.

Quote:
Originally Posted by Crumpp
And injecting fuel directly into the combustion chamber is even better, Viper. How hard is that to understand?
Why do you think it's better?

A supercharger is a pretty effective way to homogenise a mixture. The intake manifold is going to end up at roughly charge temperature, which for a Merlin at high power is going to be about 90°C. You are very unlikely to see condensation of the fuel onto the manifold at that temperature. FAR will therefore be pretty constant from one end of the manifold to the other.

Charge distribution may well vary, which would modify CHT somewhat, but the same argument applies to air distribution.

FAR will become variable when supercharger delivery temperature is low, and this will affect acceleration behaviour, especially from low boost & revs. But aero-engines spend most of their time at fixed, relatively high, power settings, and so this sort of transient behaviour is far less of a problem for an aero-engine than for a car engine.

Quote:
Originally Posted by Crumpp
And that it is much more efficient to realize the power gains by directly injecting fuel into the combustion chamber than it is by dumping it into an intake manifold......
If you are supercharging then you'll win by injecting into the supercharger and thereby reducing supercharger work.

The supercharger is basically adiabatic if you're not injecting fuel or water into it. However, isentropic efficiency of superchargers tends to be much lower than the isentropic efficiency of the compression stroke of a piston engine.

In any case, you're always going to gain more by reducing temperature as early in the compression process as possible, because compressors (whether steady-flow or non-flow) produce temperature ratios in exchange for pressure ratios, whilst the absolute work required for the compression process is proportional to deltaH, i.e. Cp*deltaT.

If you reduce the starting temperature then you reduce the deltaT all the way down the chain, and the benefit multiplies. Therefore, if your fuel is liquid, you really want to inject it at or before the start of the compression process in order to maximise the thermodynamic benefit associated with its latent heat of evaporation.
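That multiplying effect is easy to see numerically. A minimal sketch (all figures here are illustrative assumptions, not data for any particular engine):

```python
# Sketch: cooling the charge before compression multiplies the benefit,
# because each compression stage applies a temperature *ratio*. The stage
# ratios, cp, and the 25 K cooling figure are illustrative assumptions.
CP = 1005.0          # J/(kg*K), specific heat of air at constant pressure
RATIOS = [1.5, 2.0]  # assumed temperature ratios of two compression stages

def work_per_kg(t_start, pre_cool=0.0):
    """Specific compression work (J/kg) = cp * temperature rise."""
    t_in = t_start - pre_cool    # evaporative cooling before compression
    t = t_in
    for r in RATIOS:
        t *= r                   # each stage multiplies the temperature
    return CP * (t - t_in)

baseline = work_per_kg(288.0)               # no charge cooling
cooled = work_per_kg(288.0, pre_cool=25.0)  # 25 K of cooling first
# The 25 K head start saves 25 * (1.5 * 2.0 - 1) = 50 K of temperature
# rise, i.e. about 8.7% of the compression work in this toy example.
print(baseline, cooled, round(1.0 - cooled / baseline, 3))
```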

Clearly for a naturally aspirated engine you might as well go for direct injection, especially if the number of cylinders is small.

The cylinders & pistons are very far from being adiabatic, but are very efficient at performing compression work. The limiting factor is the rate at which they can pass non-dimensional flow through their intake & exhaust valves at any given rpm. Hence supercharging; pre-compressing the air allows you to get more absolute mass flow rate into the fixed non-dimensional mass flow capacity of the piston engine. That's the objective of the exercise.

You use a steady flow machine upstream of the unsteady flow machine because unsteady flow machines are inherently bigger than steady flow machines, and therefore you can shrink the physical size of the engine in relation to its effective flow capacity.

Quote:
Originally Posted by Crumpp
No, it is attractive and if we had the technology to do it on a cost effective basis, we would have done it. It is the ultimate fuel metering method for a piston engine in terms of power and efficiency. A single point injection simply cannot maintain a stoichiometric mixture in all the cylinders. That is why the EGT and CHT will always be different in each cylinder unless you have direct fuel injection.
That's only true for naturally aspirated engines.

EGT and CHT will be different anyway because that's life; holding FAR constant is great but it's not magic; airflow into the cylinder depends upon induction manifold design and engine speed. Induction manifold design is quite a complex business, and compromises are inevitable.

DI is very useful if you want to vary non-dimensional power setting over a wide range, but this isn't so important for an aero-engine, and so the higher design-point efficiency offered by injecting into the eye of the supercharger is a pretty compelling argument, before you even consider the cost, mass and complexity advantages.

Modern GA engines are going DI because they're going CI (in order to burn Jet-A and save money), and also because they don't have a lot of cylinders, which means that the cost of injectors is inherently less important.
#112
06-23-2011, 09:27 PM
Viper2000
Approved Member
Join Date: Mar 2011
Posts: 218

Quote:
Originally Posted by Crumpp
In fact in the 1950's, we started doing it.....

In the R-4360C Wasp Major power-plant with CH 9 turbo-blower.....



http://www.flightglobal.com/pdfarchi...0-%200248.html
The benefit comes from getting more work out of the turbocharger; if you cruise higher then the turbocharger has a bigger expansion ratio to work with and can therefore get more work out of the exhaust.

Charge temperature is limiting, and you'd obviously rather get your supercharger work from the turbine than the crankshaft. So you throw away the supercharger, but that means you need to either go to DI or else inject into the eye of the turbosupercharger impeller to homogenise the mixture.

The turbocharger came from GE, whilst the piston engine came from P&W.

Fuel injection into the turbocharger wasn't viable because of the fire risk, both in case of leaks between the hot and cold sides of the turbocharger, and because of the relatively long ducting from turbocharger to piston engine, which would otherwise have been full of stoichiometric mixture. But most importantly, it wasn't viable because it would have been almost impossible to start the engine unless the turbocharger was clutched to the crankshaft for that purpose, which in turn wasn't possible due to the physical separation between turbocharger and piston engine which was itself a consequence of the historical decision that GE would make turbochargers in isolation from the piston engine manufacturers.

The thermodynamic benefit comes from utilisation of exhaust enthalpy which would otherwise have gone to waste. However, there is an enthalpy loss equal to the sum of enthalpy drop across the aftercooler, and the cooling drag on the cold side thereof; if fuel had been injected upstream of the turbosupercharger, the compression process would have had a higher apparent isentropic efficiency, and the aftercooler would have had less work to do because the compressor delivery temperature would have been lower.

In essence, the benefit comes from improved matching/work balance rather than from going to DI itself. In other words, they wanted to throw away the supercharger to get more of their compressor work from the turbocharger, and this drove them to DI because they then didn't have a method to homogenise the mixture. So DI is a consequence rather than a cause.
#113
06-23-2011, 10:50 PM
CaptainDoggles
Approved Member
Join Date: Jun 2011
Posts: 1,198

That's an interesting take on direct injection, Viper.

I'd always assumed DI was first introduced to more reliably control mixture, eliminate pre-ignition and get a stratified charge in CI systems.
#114
06-24-2011, 12:31 AM
Viper2000
Approved Member
Join Date: Mar 2011
Posts: 218

Quote:
Originally Posted by CaptainDoggles
That's an interesting take on direct injection, Viper.

I'd always assumed DI was first introduced to more reliably control mixture, eliminate pre-ignition and get a stratified charge in CI systems.
DI was indeed initially introduced to improve mixture control and avoid the various problems associated with carburettors.

In general, a single carburettor isn't likely to give good mixture distribution to a multi-cylinder naturally aspirated piston engine, because evaporation isn't completed before the charge reaches the intake manifold, and so you get a fairly complex multi-phase flow.

However, if you've got a supercharger between the carburettor and the intake manifold, things get a lot better because the supercharger homogenises the mixture and also increases its static temperature. This means that much more, if not all, of the fuel evaporates; diffusion is then very helpful in further homogenising the mixture. Furthermore, the flow is likely to warm the induction manifold sufficiently that fuel doesn't condense upon contact with it.

This means that supercharged engines which put the fuel into the airflow (whether via a carburettor or some kind of injection system) upstream of the supercharger will tend to deliver a pretty consistent FAR to all of their cylinders. This removes one of the main motivations for direct injection.

However, it must be stressed that this is something of a special case; take the supercharger away and FAR will vary considerably from cylinder to cylinder, whereas with DI you can set FAR quite accurately.

As an aside, WWII vintage technology would just meter the fuel into the cylinders, which is an open-loop approach. You'd still get FAR variations from cylinder to cylinder because although the fuel mass injected would be the same for all cylinders, the airflow would not.

A modern car engine would use an oxygen sensor to tune the fuel mass injected into the cylinder so as to maintain stoichiometry throughout the operating range of the engine; this is vital to the operation of 3-way catalysts. However, this sort of closed-loop approach requires computers, and it is primarily driven by emissions legislation rather than engine performance (power, SFC) considerations.
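The closed-loop idea can be sketched as a toy relaxation loop (the sensor model, gain, and all numbers are assumptions for illustration, not any real ECU's algorithm):

```python
# Toy closed-loop fuelling sketch: trim injected fuel mass toward
# stoichiometry using an exhaust oxygen (lambda) sensor reading.
# Gains, masses and the idealised sensor are illustrative assumptions.
STOICH_FAR = 1.0 / 14.7   # approximate stoichiometric fuel/air ratio, petrol

def measured_lambda(air_mass, fuel_mass):
    """Idealised sensor: lambda = actual AFR / stoichiometric AFR."""
    return (air_mass / fuel_mass) * STOICH_FAR

def trim_fuel(fuel_mass, lam, gain=0.3):
    """Nudge fuel mass toward lambda = 1 (lean -> add fuel, rich -> cut)."""
    return fuel_mass * (1.0 + gain * (lam - 1.0))

air = 0.5e-3     # kg of air per intake stroke (assumed)
fuel = 3.0e-5    # initial open-loop fuel guess, kg (slightly lean)
for _ in range(20):
    fuel = trim_fuel(fuel, measured_lambda(air, fuel))
print(round(air / fuel, 2))  # -> 14.7, i.e. converged to stoichiometry
```

Open-loop WWII-style metering corresponds to skipping the loop entirely and living with whatever FAR the fixed fuel charge produces.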

Without such constraints you'd run the engine leaner to improve SFC, or richer to improve power.

Fuel injection is very useful for CI engines because injection timing can be controlled in order to control the timing of the combustion event; it also allows the pressure profile of the combustion event to be controlled.

Limiting the peak cylinder pressure allows you to make the engine lighter.

With modern engines you can also just stop fuelling cylinders in order to reduce power. This is useful because the turn-down ratio of the injectors is limited if you want good atomisation; poor atomisation leads to reduced combustion efficiency and increased emissions (especially CO and UHC). The alternative is to use multiple injectors per cylinder, but that's a pain.

(If you really, really want to, you can build a CI engine with a carburettor, but it's hard work, and it tends to be difficult to control combustion in a satisfactory manner, which hurts thermal efficiency and tends to cause vibration due to considerable cycle-to-cycle variations in engine behaviour. You'll also find that the smoke limit is set by the richest cylinder, and if smoke is a factor then this inevitably limits output.)

Anyway, DI is great for mixture control, but as with all aspects of engine design, it's an option within a tradespace, rather than an unmitigated upside.

If you're mostly interested in operating at a fixed design point then the advantages of DI may easily be outweighed by other options, especially if you're supercharging heavily. OTOH, if you're making a small engine for an economical passenger car today, it's very hard to beat a turbocharged CI engine with DI.

So I'm not suggesting that carburettors are magic; what I'm saying is that DI isn't magic either. Which is better depends upon your priorities, and the job you're trying to make your engine do.

In the very specific example of R-4360C, the primary goal was to get rid of the supercharger so that more useful work could be extracted from the exhaust via the turbocharger. The removal of the supercharger then drove the design towards DI. But it's not reasonable to say that DI is thermodynamically superior; the advantage comes from the improved utilisation of exhaust enthalpy, and DI is just a tool which allows this to be done. Indeed, had it been possible to inject fuel upstream of the turbosupercharger's impeller, superior performance would have been attained.

So DI is analogous to a bridge in this case; the economic benefit comes from the traffic which the bridge carries, rather than the bridge itself, and life would have been easier & cheaper if there had been no need to build a bridge in the first place.

But I must stress again that I'm talking about big aero-engines with high degrees of supercharge here; very different conclusions would be reached if the engine in question was designed to power a car, or even a significantly smaller aeroplane.
#115
06-24-2011, 01:45 AM
Crumpp
Approved Member
Join Date: Feb 2008
Posts: 1,552

Quote:
Why do you think it's better?
It is better, Viper. An intake manifold cannot precisely meter fuel to the cylinders, regardless of whether it has a supercharger on it or not.

Have you ever flown a piston-engine aircraft with individual EGT/CHT gauges? The CHT and EGT will be vastly different if the fuel metering system is not direct injection.

Each cylinder is being metered a different amount of fuel. That means power loss just from the thermal differences!

Not to mention that none of them are a stoichiometric mixture. Add to that, it is impossible to optimize the timing advance. All of your cylinders are developing different power levels and none of them are optimal.

There is no way to precisely control how much fuel goes to each cylinder in an intake manifold.

With direct injection, you can not only optimize timing advance to the power curve, you can maintain a stoichiometric mixture. The thermal losses are eliminated because your CHT/EGT's are the same.

Quote:
In any case, you're always going to gain more by reducing temperature as early in the compression process as possible, because compressors (whether steady-flow or non-flow) produce temperature ratios in exchange for pressure ratios, whilst the absolute work required for the compression process is proportional to deltaH, i.e. Cp*deltaT.
Add to that, the fuel cools the combustion chamber much more efficiently than cold air.

A simple illustration of that basic principle.

Approximate specific heat capacities:

Air = 1006 J/kg·°C
Steel = 460 J/kg·°C
Fuel = 2100 J/kg·°C

To change the temperature of a mass of 1 kg of each by 2 degrees…

Air = 1006 J/kg·°C × 1 kg × 2 °C = 2012 J
Fuel = 2100 J/kg·°C × 1 kg × 2 °C = 4200 J
Steel = 460 J/kg·°C × 1 kg × 2 °C = 920 J

Our 4200J of fuel energy goes to cool the 15C air…

4200 J ÷ (1006 J/kg·°C × 1 kg) = ΔT ≈ 4.17 °C

15 °C − 4.17 °C = 10.83 °C

Now let us dump our fuel on the hot steel of our combustion chamber.

4200 J ÷ (460 J/kg·°C × 1 kg) = ΔT ≈ 9.13 °C

15 °C − 9.13 °C = 5.87 °C

5.87 °C is much colder than 10.83 °C.
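A quick check of the arithmetic above (same rough handbook specific heats; the steel step uses the 460 J/kg·°C figure from the list):

```python
# Re-run the back-of-envelope: how far does the 4200 J absorbed by the
# evaporating fuel pull down 1 kg of air versus 1 kg of steel?
# Specific heats are rough handbook values in J/(kg*K), as listed above.
C_AIR, C_STEEL, C_FUEL = 1006.0, 460.0, 2100.0
ENERGY = C_FUEL * 1.0 * 2.0            # 4200 J

drop_air = ENERGY / (C_AIR * 1.0)      # ~4.17 K drop for 1 kg of air
drop_steel = ENERGY / (C_STEEL * 1.0)  # ~9.13 K drop for 1 kg of steel
print(round(15.0 - drop_air, 2), round(15.0 - drop_steel, 2))  # -> 10.83 5.87
```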

Last edited by Crumpp; 06-24-2011 at 01:59 AM.
#116
06-24-2011, 01:58 AM
Crumpp
Approved Member
Join Date: Feb 2008
Posts: 1,552

Quote:
I'd always assumed DI was first introduced to more reliably control mixture, eliminate pre-ignition and get a stratified charge in CI systems.
That is correct.

Quote:
Viper says:
If you are supercharging then you'll win by injecting into the supercharger and thereby reducing supercharger work.
A supercharger is an intake-system component; it has nothing to do with fuel metering.

Every supercharged engine has to have a fuel metering system. The Merlin for example uses a carburetor. It has a supercharger but fuel is metered by the carburetor.

Read the article you posted. Rolls-Royce does not make the argument that a carburettor with a supercharger is better; they argue that their engines are not as inefficient as people think when compared to the direct injection used by the Germans.

The only drawback to Direct Injection is complexity and expense.
#117
06-24-2011, 02:05 AM
Crumpp
Approved Member
Join Date: Feb 2008
Posts: 1,552

Nice article on direct injection:

http://books.google.com/books?id=byE...page&q&f=false
#118
06-24-2011, 08:54 AM
Viper2000
Approved Member
Join Date: Mar 2011
Posts: 218

Go back to the paper given by Lovesey which I posted earlier.

Engine shaft power is proportional to mass flow rate.

Mass flow rate, W, is given by

W=0.422*Ncylinders*(Pcharge-(1/6)*Pexhaust)/Tcharge

Charge temperature & pressure are measured in the intake manifold. In this equation, W is in lb/minute; piston engines are tiny.
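As a sketch, the quoted correlation can be wrapped in a helper (the units implied by the 0.422 constant follow Lovesey's paper and are simply assumed consistent here; the example inputs are arbitrary):

```python
# Charge-flow correlation quoted from Lovesey's paper:
#   W = 0.422 * n_cyl * (P_charge - P_exhaust / 6) / T_charge
# W is in lb/min per the post; the pressure/temperature units implied by
# the 0.422 constant follow the original paper and are assumed consistent.
def charge_flow_lb_per_min(n_cylinders, p_charge, p_exhaust, t_charge):
    """Mass flow of charge into the engine, per the quoted correlation."""
    return 0.422 * n_cylinders * (p_charge - p_exhaust / 6.0) / t_charge

# Arbitrary inputs purely to show the shape of the dependence: flow rises
# with boost and falls with charge temperature and exhaust back-pressure.
print(charge_flow_lb_per_min(12, p_charge=60.0, p_exhaust=30.0, t_charge=360.0))
```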

You will note that Lovesey cites a 7% improvement in supercharger pressure ratio from injecting fuel into the eye of the supercharger.

This is very significant given the pretty awful isentropic efficiency of the supercharger.

You can go through the data, and calculate the supercharger work from the efficiency curves in Figure 11.

You can calculate the actual engine air consumption iteratively by assuming that the FAR is about 1/12 at high power. Hence, given the SFC curve and the full throttle power vs altitude curve, you can use the fuel flow to calculate the total rate of charge consumption.

You can use the total rate of charge consumption to calculate the supercharger power consumption as a function of inlet temperature. You must of course add this supercharger power consumption to the brake power in order to calculate the shaft power, because it is the shaft power, not the brake power, which is directly proportional to fuel flow.

This means that you'll need to iterate in order to achieve convergence.

Try this with and without fuel injection into the eye of the supercharger, which may be modelled as a 25 K temperature reduction exchanged for a 1/12 mass flow rate increase.

Because the supercharger work is W*Cp*deltaT, you will find that injecting fuel into the eye of the supercharger results in a considerable reduction in the supercharger power required for any given boost pressure, which naturally improves brake power and brake SFC.
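That with/without comparison can be sketched crudely. The 25 K cooling and the 1/12 FAR are the figures from the post; the baseline mass flow and temperature ratio are placeholders, and this fixed-temperature-ratio model deliberately ignores the intra-compression evaporation behind Lovesey's 7% pressure-ratio gain:

```python
# Supercharger work = W * cp * deltaT. Model fuel injection into the eye
# as a 25 K inlet-temperature drop plus a 1/12 mass-flow increase (the
# fuel itself, at FAR ~ 1/12). Baseline figures are placeholders, not
# Merlin data, and the fixed temperature ratio is a simplification.
CP = 1005.0      # J/(kg*K)
T_RATIO = 1.6    # assumed supercharger temperature ratio at a given boost
W_AIR = 1.3      # kg/s of air, placeholder

def sc_power_watts(w, t_inlet):
    """Supercharger shaft power for a fixed temperature ratio."""
    return w * CP * t_inlet * (T_RATIO - 1.0)

dry = sc_power_watts(W_AIR, 288.0)                       # fuel added after the blower (DI)
wet = sc_power_watts(W_AIR * (1.0 + 1.0 / 12.0), 263.0)  # fuel into the eye
print(round(dry / 1000), round(wet / 1000), round(1.0 - wet / dry, 3))
```

Even in this crude model the eye-injected case needs less shaft power despite pumping the extra fuel mass; the larger gains in practice come from the improved apparent isentropic efficiency that this model leaves out.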

Your argument regarding cylinder temperature is spurious, because the engine has a cooling system to maintain CHT, and because the reduction in induction manifold temperature produces a considerable reduction in charge temperature throughout the compression stroke: compression through a fixed volume ratio results in a fixed temperature ratio, not a fixed absolute temperature increment.

This means that there is less compression work.
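The fixed-temperature-ratio point is easy to illustrate (the compression ratio and γ here are generic assumed values, not Merlin figures):

```python
# Ideal adiabatic compression through a fixed volume ratio gives a fixed
# temperature *ratio*: T2 / T1 = r ** (gamma - 1). So a charge that enters
# the cylinder 25 K cooler stays proportionally cooler throughout the
# stroke. Compression ratio and gamma are generic assumptions.
GAMMA = 1.4
R_VOL = 6.0                     # assumed compression ratio
ratio = R_VOL ** (GAMMA - 1.0)  # ~2.05 temperature ratio

hot = 360.0 * ratio             # charge entering at 360 K
cool = 335.0 * ratio            # same engine, charge 25 K cooler
print(round(hot - cool, 1))     # the 25 K advantage grows to ~51 K
```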

Peak cycle temperature is essentially fixed by dissociation, and therefore the reduction in charge temperature translates directly into an increase in BMEP.

Alternatively, you could hold constant charge temperature and reduce the size of the aftercooler. Either way, you're still getting a benefit from the latent heat of evaporation of the fuel; but this benefit is greater overall when the fuel is injected into the eye of the supercharger.

Arguments about stoichiometry are less important for an aero-engine at high power than for a car engine because you're not bothered about emissions (at least in this period). Therefore you run the whole thing rich of peak.

Cylinder to cylinder variation in FAR will be small so long as the induction manifold temperature is kept reasonably high; this may be seen from the discussion about lead fouling towards the end of Lovesey's paper; charge distribution is good down to intake manifold temperatures of about 35°C.

Cylinder to cylinder charge consumption will vary due to intake manifold aerodynamics.

But this actually means that if you go for 1940s DI you'll get a variation in FAR from cylinder to cylinder because the injection system would give each cylinder equal fuel irrespective of its actual air consumption. Modern engines would use an oxygen sensor in the exhaust to maintain stoichiometry. However, this would preclude operations rich of stoichiometric, so the benefit is to BSFC and emissions rather than to absolute power.

I have flown a Citabria with modern after-market engine instrumentation. Obviously the CHTs vary because it's air cooled; you're never going to get the back cylinders as cool as the front ones. Likewise, mixture distribution will always be questionable for a naturally aspirated engine, even on a hot day in South Carolina.

That sort of engine would obviously benefit from DI; and in the modern world, the philosophy is to turbo-normalise if altitude performance is wanted. The convenience of maintaining a fixed engine operating point independent of altitude is considerable. Modern compressors are very much more efficient than those from the 1940s, and therefore there is less compressor work to save in the first place. Additionally, with substantial exhaust energy being dumped out of the waste-gate, there is obviously less motivation to reduce compressor work. So there are a variety of factors driving modern engines towards DI.

This does not mean that single point injection upstream of a supercharger does not have advantages, especially if you have a high degree of supercharge.

Modern piston engines simply have different design goals than the high-powered piston aero-engines of the 1940s.
#119
06-24-2011, 09:30 AM
Kurfürst
Approved Member
Join Date: Oct 2007
Posts: 705

Quote:
Originally Posted by Viper2000
Because the supercharger work is W*Cp*deltaT, you will find that injecting fuel into the eye of the supercharger results in a considerable reduction in the supercharger power required for any given boost pressure, which naturally improves brake power and brake SFC.
Pretty redundant if you ask me, if you are already using a variable-speed supercharger already adjusted for supercharging needs, as DB did with its barometrically controlled hydraulic clutch. The supercharger wasn't making less waste; it made almost no waste at all.

Correct me if I am wrong, but injecting fuel into the supercharger eye reduces charge temperature, delaying the detonation point. Direct fuel injection does the same (in the combustion chamber), but later, and with better fuel efficiency, no risk of backfires, and no negative-G problems. High-octane fuel is a pretty expensive agent for charge cooling.
__________________
Il-2Bugtracker: Feature #200: Missing 100 octane subtypes of Bf 109E and Bf 110C http://www.il2bugtracker.com/issues/200
Il-2Bugtracker: Bug #415: Spitfire Mk I, Ia, and Mk II: Stability and Control http://www.il2bugtracker.com/issues/415

Kurfürst - Your resource site on Bf 109 performance! http://kurfurst.org
#120
06-24-2011, 09:55 AM
Viper2000
Approved Member
Join Date: Mar 2011
Posts: 218

You're confusing throttling losses, which are avoided by varying supercharger speed, with the aerodynamic losses associated with the supercharger's design, which are not.

The aerodynamic losses increase the temperature rise associated with a given pressure rise, which increases the work required.

Because compression through a given pressure ratio tends to produce a given temperature ratio, you reduce the absolute deltaT by reducing the initial temperature, all other factors remaining constant. This reduces compressor work, which is equivalent to an increase in compressor efficiency.

You obviously get a greater benefit from cooling the working fluid at the start of the compression process.

Eg

Start at 288 K. Cool by 25 K. T = 263 K. Compress through a temperature ratio of 1.5, and then further through a temperature ratio of 2. Temperature = 263*1.5*2 = 789 K. Delta T from start = 501 K.

Compare with:

Start at 288 K. Compress through temperature ratio 1.5, cool by 25 K and then compress through temperature ratio of 2. Final temperature is then 814 K, so delta T is 526 K.

Thus, compressor work differs by 5% or so in this example.
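The two orderings check out numerically (same figures as in the example above):

```python
# Verify the worked example: 25 K of cooling applied before both stages
# versus between them. Stage temperature ratios 1.5 and 2.0, start 288 K.
def cool_first():
    return (288.0 - 25.0) * 1.5 * 2.0    # 263 * 3 = 789 K

def cool_between():
    return (288.0 * 1.5 - 25.0) * 2.0    # (432 - 25) * 2 = 814 K

d1 = cool_first() - 288.0    # 501 K net temperature rise
d2 = cool_between() - 288.0  # 526 K net temperature rise
print(d1, d2, round(d2 / d1 - 1.0, 3))   # ~5% more work when cooling later
```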

This is the reason for the reduction in the isentropic efficiency associated with a given polytropic efficiency as compressor pressure ratio increases.